Liferay 7 CE/Liferay DXP Scheduled Tasks

Technical Blogs February 6, 2017 By David H Nebinger

In Liferay 6.x, scheduled tasks were kind of easy to implement.

I mean, you'd implement a class that implements the Liferay Message Bus's MessageListener interface and then add the details in the <scheduler-entry /> sections in your liferay-portlet.xml file and you'd be off to the races.

Well, things are not so simple with Liferay 7 CE / Liferay DXP. In fact, I couldn't find a reference anywhere on dev.liferay.com, so I thought I'd whip up a quick blog on them.

Of course I'm going to pursue this as an OSGi-only solution.

StorageType Information

Before we schedule a job, we should first discuss the supported StorageTypes. Liferay has three:

  • StorageType.MEMORY_CLUSTERED - This is the default storage type, the one you'll typically want to shoot for. This storage type combines two aspects, MEMORY and CLUSTERED. For MEMORY, that means the job information (next run, etc.) is only held in memory and is not persisted anywhere. For CLUSTERED, that means the job is cluster-aware and will only run on one node in the cluster.
  • StorageType.MEMORY - For this storage type, no job information is persisted. The important part here is that you may miss some job runs in cases of outages. For example, if you have a job to run on the 1st of every month but you have a big outage and the server/cluster is down on the 1st, the job will not run. And unlike in PERSISTED, when the server comes up the job will not run even though it was missed. Note that this storage type is not cluster-aware, so your job will run on every node in the cluster which could cause duplicate runs.
  • StorageType.PERSISTED - This is the opposite of MEMORY as job details will be persisted in the database. For the missed job above, when the server comes up on the 2nd it will realize the job was missed and will immediately process the job. Note that this storage type relies on cluster-support facilities in the storage engine (Quartz's implementation discussed here: http://www.quartz-scheduler.org/documentation/quartz-2.x/configuration/ConfigJDBCJobStoreClustering.html).

So if you're in a cluster, you'll want to stick with either MEMORY_CLUSTERED or PERSISTED to ensure your job doesn't run on every node (e.g. if you're running a report to generate a PDF and email it, you wouldn't want your 4-node cluster generating the report 4 times and emailing 4 copies). You may want to stick with the MEMORY type when you have, say, an administrative task that needs to run regularly on all nodes in your cluster.

Choosing between MEMORY[_CLUSTERED] and PERSISTED comes down to how resilient you need to be in the case of missed job fire times. For example, if that monthly report is mission critical, you might want to elect for PERSISTED to ensure the report goes out as soon as the cluster is back up and ready to pick up the missed job. However, if missed runs are not mission critical, it is easier to stick with one of the MEMORY options.

Finally, even if you're not currently in a cluster, I would encourage you to make choices as if you were running in a cluster right from the beginning. The last thing you want to be doing when you start scaling up your environment is figuring out why tasks that ran reliably on a single server are no longer running as they used to.

Adding StorageType To SchedulerEntry

We'll be handling our scheduling shortly, but for now we'll worry about the SchedulerEntry. The SchedulerEntry object contains most of the details about the scheduled task to be defined, but it does not have details about the StorageType. Remember that MEMORY_CLUSTERED is the default, so if you're going to be using that type, you can skip this section. But to be consistent, you can still apply the changes in this section even for the MEMORY_CLUSTERED type.

To add StorageType details to our SchedulerEntry, we need to make our SchedulerEntry implementation class implement the com.liferay.portal.kernel.scheduler.StorageTypeAware interface. When Liferay's scheduler implementation classes are identifying the StorageType to use, they start with MEMORY_CLUSTERED and will only use another StorageType if the SchedulerEntry implements this interface.

So let's start by defining a SchedulerEntry wrapper class that implements the SchedulerEntry interface as well as the StorageTypeAware interface:

import com.liferay.portal.kernel.scheduler.SchedulerEntry;
import com.liferay.portal.kernel.scheduler.SchedulerEntryImpl;
import com.liferay.portal.kernel.scheduler.StorageType;
import com.liferay.portal.kernel.scheduler.StorageTypeAware;
import com.liferay.portal.kernel.scheduler.Trigger;

public class StorageTypeAwareSchedulerEntryImpl extends SchedulerEntryImpl implements SchedulerEntry, StorageTypeAware {

  /**
   * StorageTypeAwareSchedulerEntryImpl: Constructor for the class.
   * @param schedulerEntry
   */
  public StorageTypeAwareSchedulerEntryImpl(final SchedulerEntryImpl schedulerEntry) {
    super();

    _schedulerEntry = schedulerEntry;

    // use the same default that Liferay uses.
    _storageType = StorageType.MEMORY_CLUSTERED;
  }

  /**
   * StorageTypeAwareSchedulerEntryImpl: Constructor for the class.
   * @param schedulerEntry
   * @param storageType
   */
  public StorageTypeAwareSchedulerEntryImpl(final SchedulerEntryImpl schedulerEntry, final StorageType storageType) {
    super();

    _schedulerEntry = schedulerEntry;
    _storageType = storageType;
  }

  @Override
  public String getDescription() {
    return _schedulerEntry.getDescription();
  }

  @Override
  public String getEventListenerClass() {
    return _schedulerEntry.getEventListenerClass();
  }

  @Override
  public StorageType getStorageType() {
    return _storageType;
  }

  @Override
  public Trigger getTrigger() {
    return _schedulerEntry.getTrigger();
  }

  public void setDescription(final String description) {
    _schedulerEntry.setDescription(description);
  }
  public void setTrigger(final Trigger trigger) {
    _schedulerEntry.setTrigger(trigger);
  }
  public void setEventListenerClass(final String eventListenerClass) {
    _schedulerEntry.setEventListenerClass(eventListenerClass);
  }
  
  private SchedulerEntryImpl _schedulerEntry;
  private StorageType _storageType;
}

Now you can use this class to wrap a current SchedulerEntryImpl while adding the StorageTypeAware implementation.
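
For example, here's a minimal sketch of how you'd use the wrapper; the jobTrigger and the MyTaskMessageListener class are the ones we'll build in the next section:

// build a normal scheduler entry for the listener...
SchedulerEntryImpl schedulerEntry = new SchedulerEntryImpl();

schedulerEntry.setEventListenerClass(MyTaskMessageListener.class.getName());
schedulerEntry.setTrigger(jobTrigger);

// ...then wrap it so the scheduler engine sees a StorageTypeAware entry.
SchedulerEntry entry = new StorageTypeAwareSchedulerEntryImpl(schedulerEntry, StorageType.PERSISTED);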

Defining The Scheduled Task

NOTE: If you're using DXP FixPack 14 or later or Liferay 7 CE GA4 or later, jump down to the end of the blog post for a necessary change due to the deprecation of the BaseSchedulerEntryMessageListener class.

We have all of the pieces now to build out the code for a scheduled task in Liferay 7 CE / Liferay DXP:

@Component(
  immediate = true, property = {"cron.expression=0 0 0 * * ?"},
  service = MyTaskMessageListener.class
)
public class MyTaskMessageListener extends BaseSchedulerEntryMessageListener {

  /**
   * doReceive: This is where the magic happens, this is where you want to do the work for
   * the scheduled job.
   * @param message This is the message object tied to the job.  If you stored data with the
   *                job, the message will contain that data.   
   * @throws Exception In case there is some sort of error processing the task.
   */
  @Override
  protected void doReceive(Message message) throws Exception {

    _log.info("Scheduled task executed...");
  }

  /**
   * activate: Called whenever the properties for the component change (a la Config Admin)
   * or OSGi is activating the component.
   * @param properties The properties map from Config Admin.
   * @throws SchedulerException in case of error.
   */
  @Activate
  @Modified
  protected void activate(Map<String,Object> properties) throws SchedulerException {

    // extract the cron expression from the properties
    String cronExpression = GetterUtil.getString(properties.get("cron.expression"), _DEFAULT_CRON_EXPRESSION);

    // create a new trigger definition for the job.
    String listenerClass = getEventListenerClass();
    Trigger jobTrigger = _triggerFactory.createTrigger(listenerClass, listenerClass, new Date(), null, cronExpression);

    // wrap the current scheduler entry in our new wrapper.
    // use the persisted storage type and set the wrapper back to the class field.
    schedulerEntryImpl = new StorageTypeAwareSchedulerEntryImpl(schedulerEntryImpl, StorageType.PERSISTED);

    // update the trigger for the scheduled job.
    schedulerEntryImpl.setTrigger(jobTrigger);

    // if we were initialized (i.e. if this is called due to CA modification)
    if (_initialized) {
      // first deactivate the current job before we schedule.
      deactivate();
    }

    // register the scheduled task
    _schedulerEngineHelper.register(this, schedulerEntryImpl, DestinationNames.SCHEDULER_DISPATCH);

    // set the initialized flag.
    _initialized = true;
  }

  /**
   * deactivate: Called when OSGi is deactivating the component.
   */
  @Deactivate
  protected void deactivate() {
    // if we previously were initialized
    if (_initialized) {
      // unschedule the job so it is cleaned up
      try {
        _schedulerEngineHelper.unschedule(schedulerEntryImpl, getStorageType());
      } catch (SchedulerException se) {
        if (_log.isWarnEnabled()) {
          _log.warn("Unable to unschedule trigger", se);
        }
      }

      // unregister this listener
      _schedulerEngineHelper.unregister(this);
    }
    
    // clear the initialized flag
    _initialized = false;
  }

  /**
   * getStorageType: Utility method to get the storage type from the scheduler entry wrapper.
   * @return StorageType The storage type to use.
   */
  protected StorageType getStorageType() {
    if (schedulerEntryImpl instanceof StorageTypeAware) {
      return ((StorageTypeAware) schedulerEntryImpl).getStorageType();
    }
    
    return StorageType.MEMORY_CLUSTERED;
  }
  
  /**
   * setModuleServiceLifecycle: So this requires some explanation...
   * 
   * OSGi will start a component once all of its dependencies are satisfied.  However, there
   * are times where you want to hold off until the portal is completely ready to go.
   * 
   * This reference declaration is waiting for the ModuleServiceLifecycle's PORTAL_INITIALIZED
   * component which will not be available until, surprise surprise, the portal has finished
   * initializing.
   * 
   * With this reference, this component activation waits until portal initialization has completed.
   * @param moduleServiceLifecycle
   */
  @Reference(target = ModuleServiceLifecycle.PORTAL_INITIALIZED, unbind = "-")
  protected void setModuleServiceLifecycle(ModuleServiceLifecycle moduleServiceLifecycle) {
  }

  @Reference(unbind = "-")
  protected void setTriggerFactory(TriggerFactory triggerFactory) {
    _triggerFactory = triggerFactory;
  }

  @Reference(unbind = "-")
  protected void setSchedulerEngineHelper(SchedulerEngineHelper schedulerEngineHelper) {
    _schedulerEngineHelper = schedulerEngineHelper;
  }

  // the default cron expression is to run daily at midnight
  private static final String _DEFAULT_CRON_EXPRESSION = "0 0 0 * * ?";

  private static final Log _log = LogFactoryUtil.getLog(MyTaskMessageListener.class);

  private volatile boolean _initialized;
  private TriggerFactory _triggerFactory;
  private SchedulerEngineHelper _schedulerEngineHelper;
}

So the code here is kinda thick, but I've documented it as fully as I can.

The base class, BaseSchedulerEntryMessageListener, is a common base class for all schedule-based message listeners. It is pretty short, so you are encouraged to open it up in the source and peruse it to see what few services it provides.

The bulk of the code you can use as-is. You'll probably want to come up with your own default cron expression constant and property so you're not running at midnight (and that's midnight in your app server's time zone; Quartz cron expressions are evaluated against the time zone your app server is configured to run in).
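
For reference, Quartz cron expressions have six required fields (seconds, minutes, hours, day of month, month, day of week) plus an optional year. A few illustrative values:

"0 0 0 * * ?"       // every day at midnight (the default used above)
"0 0/30 * * * ?"    // every 30 minutes
"0 0 6 ? * MON-FRI" // weekdays at 6:00 AM
"0 0 0 1 * ?"       // midnight on the first of every month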

And you'll certainly want to fill out the doReceive() method to actually build your scheduled task logic.

One More Thing...

One thing to keep in mind, especially with the MEMORY and MEMORY_CLUSTERED storage types: Liferay does not do anything to prevent running the same jobs multiple times.

For example, say you have a job that takes 10 minutes to run, but you schedule it to run every 5 minutes. There's no way the job can complete in 5 minutes, so multiple jobs start piling up. Sure there's a pool backing the implementation to ensure the system doesn't run away and die on you, but even that might lead to disastrous results.

So take care in your scheduling. Know what the worst case scenario is for timing your jobs and use that information to define a schedule that will work even in this situation.

You may even want to consider some sort of locking or semaphore mechanism to prevent the same job from running in parallel at all.
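
If you just need something quick inside a single JVM, a minimal sketch (my own illustration, not a Liferay API) is an AtomicBoolean guard in doReceive(). Note that this only prevents overlap within one node; a clustered deployment would need a shared lock such as a database row or a distributed lock:

// java.util.concurrent.atomic.AtomicBoolean
private final AtomicBoolean _running = new AtomicBoolean(false);

@Override
protected void doReceive(Message message) throws Exception {
  // skip this fire time if the previous run is still in progress (single node only).
  if (!_running.compareAndSet(false, true)) {
    _log.info("Previous run still in progress, skipping.");

    return;
  }

  try {
    // the actual job logic goes here...
  } finally {
    // always release the guard, even if the job logic throws.
    _running.set(false);
  }
}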

Just something to keep in mind...

Conclusion

So this is how all of those scheduled tasks from liferay-portlet.xml get migrated into the OSGi environment. Using this technique, you now have a migration path for this aspect of your legacy portlet code.

Update 05/18/2017

So I was contacted today about the use of the BaseSchedulerEntryMessageListener class as the base class for the message listener. Apparently this class has become deprecated as of DXP FP 13 as well as the upcoming GA4 release.

The only guidance I was given for updating the code happens to be the same guidance that I give to most folks wanting to know how to do something in Liferay - find an example in the Liferay source.

After reviewing various Liferay examples, we will need to change the parent class for our implementation and modify the activation code.

So now our message listener class is:

@Component(
  immediate = true, property = {"cron.expression=0 0 0 * * ?"},
  service = MyTaskMessageListener.class
)
public class MyTaskMessageListener extends BaseMessageListener {

  /**
   * doReceive: This is where the magic happens, this is where you want to do the work for
   * the scheduled job.
   * @param message This is the message object tied to the job.  If you stored data with the
   *                job, the message will contain that data.   
   * @throws Exception In case there is some sort of error processing the task.
   */
  @Override
  protected void doReceive(Message message) throws Exception {

    _log.info("Scheduled task executed...");
  }

  /**
   * activate: Called whenever the properties for the component change (a la Config Admin)
   * or OSGi is activating the component.
   * @param properties The properties map from Config Admin.
   * @throws SchedulerException in case of error.
   */
  @Activate
  @Modified
  protected void activate(Map<String,Object> properties) throws SchedulerException {

    // extract the cron expression from the properties
    String cronExpression = GetterUtil.getString(properties.get("cron.expression"), _DEFAULT_CRON_EXPRESSION);

    // create a new trigger definition for the job.
    String listenerClass = getClass().getName();
    Trigger jobTrigger = _triggerFactory.createTrigger(listenerClass, listenerClass, new Date(), null, cronExpression);

    // create the scheduler entry for the listener and wrap it in our new wrapper.
    // use the persisted storage type and set the wrapper back to the class field.
    _schedulerEntryImpl = new SchedulerEntryImpl(getClass().getName(), jobTrigger);
    _schedulerEntryImpl = new StorageTypeAwareSchedulerEntryImpl(_schedulerEntryImpl, StorageType.PERSISTED);

    // update the trigger for the scheduled job.
    _schedulerEntryImpl.setTrigger(jobTrigger);

    // if we were initialized (i.e. if this is called due to CA modification)
    if (_initialized) {
      // first deactivate the current job before we schedule.
      deactivate();
    }

    // register the scheduled task
    _schedulerEngineHelper.register(this, _schedulerEntryImpl, DestinationNames.SCHEDULER_DISPATCH);

    // set the initialized flag.
    _initialized = true;
  }

  /**
   * deactivate: Called when OSGi is deactivating the component.
   */
  @Deactivate
  protected void deactivate() {
    // if we previously were initialized
    if (_initialized) {
      // unschedule the job so it is cleaned up
      try {
        _schedulerEngineHelper.unschedule(_schedulerEntryImpl, getStorageType());
      } catch (SchedulerException se) {
        if (_log.isWarnEnabled()) {
          _log.warn("Unable to unschedule trigger", se);
        }
      }

      // unregister this listener
      _schedulerEngineHelper.unregister(this);
    }
    
    // clear the initialized flag
    _initialized = false;
  }

  /**
   * getStorageType: Utility method to get the storage type from the scheduler entry wrapper.
   * @return StorageType The storage type to use.
   */
  protected StorageType getStorageType() {
    if (_schedulerEntryImpl instanceof StorageTypeAware) {
      return ((StorageTypeAware) _schedulerEntryImpl).getStorageType();
    }
    
    return StorageType.MEMORY_CLUSTERED;
  }
  
  /**
   * setModuleServiceLifecycle: So this requires some explanation...
   * 
   * OSGi will start a component once all of its dependencies are satisfied.  However, there
   * are times where you want to hold off until the portal is completely ready to go.
   * 
   * This reference declaration is waiting for the ModuleServiceLifecycle's PORTAL_INITIALIZED
   * component which will not be available until, surprise surprise, the portal has finished
   * initializing.
   * 
   * With this reference, this component activation waits until portal initialization has completed.
   * @param moduleServiceLifecycle
   */
  @Reference(target = ModuleServiceLifecycle.PORTAL_INITIALIZED, unbind = "-")
  protected void setModuleServiceLifecycle(ModuleServiceLifecycle moduleServiceLifecycle) {
  }

  @Reference(unbind = "-")
  protected void setTriggerFactory(TriggerFactory triggerFactory) {
    _triggerFactory = triggerFactory;
  }

  @Reference(unbind = "-")
  protected void setSchedulerEngineHelper(SchedulerEngineHelper schedulerEngineHelper) {
    _schedulerEngineHelper = schedulerEngineHelper;
  }

  // the default cron expression is to run daily at midnight
  private static final String _DEFAULT_CRON_EXPRESSION = "0 0 0 * * ?";

  private static final Log _log = LogFactoryUtil.getLog(MyTaskMessageListener.class);

  private volatile boolean _initialized;
  private TriggerFactory _triggerFactory;
  private SchedulerEngineHelper _schedulerEngineHelper;
  private SchedulerEntryImpl _schedulerEntryImpl = null;
}

That's all there is to it, but it's best to avoid the deprecated class since you never know when deprecated will become disappeared...

Liferay/OSGi Annotations - What they are and when to use them

Technical Blogs February 1, 2017 By David H Nebinger

When you start reviewing Liferay 7 CE/Liferay DXP code, you run into a lot of annotations in a lot of different ways.  They can all seem kind of overwhelming when you first happen upon them, so I thought I'd whip up a little reference guide, kind of explaining what the annotations are for and when you might need to use them in your OSGi code.

So let's dive right in...

@Component

So in the OSGi world this is the all-important "Declarative Services" annotation defining a service implementation.  DS is an aspect of OSGi for declaring a service dynamically, and it has a slew of plumbing in place to allow other components to get wired to the component.

There are three primary attributes that you'll find for this annotation:

  • immediate - Often set to true, this will ensure the component is started right away and not wait for a reference wiring or lazy startup.
  • property - Used to pass in a set of OSGi properties to bind to the component.  The component can see the properties, but more importantly other components will be able to see the properties too.  These properties help to configure the component but also are used to support filtering of components.
  • service - Defines the service that the component implements.  Sometimes this is optional, but often it is mandatory to avoid ambiguity on the service the component wants to advertise.  The service listed is often an interface, but you can also use a concrete class for the service.

When are you going to use it?  Whenever you create a component that you want or need to publish into the OSGi container.  Not all of your classes need to be components.  You'll declare a component when code needs to plug into the Liferay environment (i.e. add a product nav item, define an MVC command handler, override a Liferay component) or to plug into your own extension framework (see my recent blog on building a healthcheck system).

@Reference

This is the counterpart to the @Component annotation.  @Reference is used to get OSGi to inject a component reference into your component. This is a key thing here: since OSGi is doing the injection, it is only going to work on an OSGi @Component class.  @Reference annotations are going to be ignored in non-components, and in fact they are also ignored in subclasses too.  Any injected references you need must be declared in the @Component class itself.

This is, of course, fun when you want to define a base class with a number of injected services; the base class does not get the @Component annotation (because it is not complete) and @Reference annotations are ignored in non-component classes, so the injection will never occur.  You end up copying all of the setters and @Reference annotations to all of the concrete subclasses and boy, does that get tedious.  But it is necessary and something to keep in mind.

Probably the most common attribute you're going to see here is the "unbind" attribute, and you'll often find it in the form of @Reference(unbind = "-") on a setter method. When you use a setter method with @Reference, OSGi will invoke the setter with the component to use, but the unbind attribute indicates that there is no method to call when the component is unbinding, so basically you're saying you don't handle components disappearing behind your back.  For the most part this is not a problem: the server starts up, OSGi binds the component in, and you use it happily until the system shuts down.

Another attribute you'll see here is target. Target is used as a filter mechanism; remember the properties covered in @Component? With the target attribute, you specify a query that identifies a more specific instance of a component that you'd like to receive.  Here's one example:

@Reference(
  target = "(javax.portlet.name=" + NotificationsPortletKeys.NOTIFICATIONS + ")",
  unbind = "-"
)
protected void setPanelApp(PanelApp panelApp) {
  _panelApp = panelApp;
}

The code here wants to be given an instance of a PanelApp component, but it's looking specifically for the PanelApp component tied to the notifications portlet.  Any other PanelApp component won't match the filter and won't be applied.

There are some attributes that you will sometimes find here that are pretty important, so I'm going to go into some details on those.

The first is the cardinality attribute.  The default value is ReferenceCardinality.MANDATORY, but other values are OPTIONAL, MULTIPLE, and AT_LEAST_ONE. The meanings of these are:

  • MANDATORY - The reference must be available and injected before this component will start.
  • OPTIONAL - The reference is not required for the component to start, and the component will function without a component assignment.
  • MULTIPLE - Multiple resources may satisfy the reference and the component will take all of them, but like OPTIONAL the reference is not needed for the component to start.
  • AT_LEAST_ONE - Multiple resources may satisfy the reference and the component will take all of them, but at least one is mandatory for the component to start.

The multiple options allow you to get multiple calls with references that match.  This really only makes sense if you are using the @Reference annotation on a setter method and, in the body of the method, adding to a list or array.  An alternative to this kind of thing would be to use a ServiceTracker so you wouldn't have to manage the list yourself.

The optional options allow your component to start without an assigned reference.  This kind of thing can be useful if you have a circular reference issue: A references B, which references C, which references A.  If all three use MANDATORY, none will start because the references cannot be satisfied (only started components can be assigned as a reference).  You break the circle by having one component treat its reference as optional; then all will be able to start and the references will be resolved.

The next important @Reference attribute is the policy.  Policy can be either ReferencePolicy.STATIC (the default) or ReferencePolicy.DYNAMIC.  The meanings of these are:

  • STATIC - The component will only be started when there is an assigned reference, and will not be notified of alternative services as they become available.
  • DYNAMIC - The component will start whether references are available or not, and the component will accept new references as they become available.

The reference policy controls what happens after your component starts when new reference options become available.  For STATIC, new reference options are ignored; for DYNAMIC, your component is willing to change.

Along with the policy, another important @Reference attribute is the policyOption.  This attribute can be either ReferencePolicyOption.RELUCTANT (the default) or ReferencePolicyOption.GREEDY.  The meanings of these are:

  • RELUCTANT - For single reference cardinality, new reference potentials that become available will be ignored.  For multiple reference cardinality, new reference potentials will be bound.
  • GREEDY - As new reference potentials become available, the component will bind to them.

Whew, lots of options here, so let's talk about common groupings.

First is the default: ReferenceCardinality.MANDATORY, ReferencePolicy.STATIC and ReferencePolicyOption.RELUCTANT.  This summarizes down to: your component must have one and only one reference service to start, and regardless of new services that are started, your component is going to ignore them.  These are really good and normal defaults and promote stability for your component.

Another common grouping you'll find in the Liferay source is ReferenceCardinality.OPTIONAL or MULTIPLE, ReferencePolicy.DYNAMIC and ReferencePolicyOption.GREEDY.  In this configuration, the component will function with or without reference service(s), but the component allows for changing/adding references on the fly and wants to bind to new references when they are available.
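
As a sketch of what that second grouping looks like in code (AuditWriter here is a hypothetical service interface, not a real Liferay class):

@Reference(
  cardinality = ReferenceCardinality.OPTIONAL,
  policy = ReferencePolicy.DYNAMIC,
  policyOption = ReferencePolicyOption.GREEDY
)
protected void setAuditWriter(AuditWriter auditWriter) {
  _auditWriter = auditWriter;
}

// a DYNAMIC reference needs an unbind method so the reference can be cleared;
// by the DS naming convention, setAuditWriter() pairs with unsetAuditWriter().
protected void unsetAuditWriter(AuditWriter auditWriter) {
  _auditWriter = null;
}

// volatile because the reference can change at any time under the DYNAMIC policy.
private volatile AuditWriter _auditWriter;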

Other combinations are possible, but you need to understand impacts to your component.  After all, when you declare a reference, you're declaring that you need some service(s) to make your component complete.  Consider how your component can react when there are no services, or what happens if your component stops because dependent service(s) are not available. Consider your perfect world scenario as well as a chaotic nightmare of redeployments, uninstalls, service gaps and identify how your component can weather the chaos.  If you can survive the chaos situation, you should be fine in the perfect world scenario.

Finally, when do you use the @Reference annotation?  When you need service(s) injected into your component from the OSGi environment.  These injections can come from your own module or from other modules in the OSGi container.  Remember that @Reference only works in OSGi components, but you can turn a class into a component with the addition of the @Component annotation.

@BeanReference

This is a Liferay annotation used to inject a reference to a Spring bean from the Liferay core.

@ServiceReference

This is a Liferay annotation used to inject a reference from a Spring Extender module bean.

Wait! Three Reference Annotations? Which should I use?

So there they are, the three different types of reference annotations.  Rule of thumb, most of the time you're going to want to just stick with the @Reference annotation.  The Liferay core Spring beans and Spring Extender module beans are also exposed as OSGi components, so @Reference should work most of the time.

If your @Reference isn't getting injected or is null, that is a sign that you should use one of the other reference annotations.  Here your choice is easy: if the bean is from the Liferay core, use @BeanReference, but if it is from a Spring Extender module, use the @ServiceReference annotation instead.  Note that both of these annotations require your module to use the Spring Extender too.  For setting this up, check out any of your ServiceBuilder service modules to see how to update the build.gradle and bnd.bnd file, etc.

@Activate

The @Activate annotation is OSGi's equivalent to Spring's InitializingBean interface.  It declares a method that will be invoked after the component has started.

In the Liferay source, you'll find it used with three primary method signatures:

@Activate
protected void activate() {
  ...
}
@Activate
protected void activate(Map<String, Object> properties) {
  ...
}
@Activate
protected void activate(BundleContext bundleContext, Map<String, Object> properties) {
  ...
}

There are other method signatures too, just search the Liferay source for @Activate and you'll find all of the different variations. Except for the no-argument activate method, they all depend on values injected by OSGi.  Note that the properties map is actually your properties from OSGi's Configuration Admin service.

When should you use @Activate? Whenever you need to complete some initialization tasks after the component is started but before it is used.  I've used it, for example, to set up and schedule Quartz jobs, verify database entities, etc.

@Deactivate

The @Deactivate annotation is the inverse of the @Activate annotation, it identifies a method that will be invoked when the component is being deactivated.

@Modified

The @Modified annotation marks the method that will be invoked when the component is modified, typically when its Configuration Admin properties have changed.  In Liferay code, the @Modified annotation is typically bound to the same method as the @Activate annotation so the same method handles both activation and modification.

@ProviderType

The @ProviderType comes from BND and is generally considered a complex concern to wrap your head around.  Long story greatly over-simplified, the @ProviderType is used by BND to define the version ranges assigned in the OSGi manifest in implementors and tries to restrict the range to a narrow version difference.

The idea here is to ensure that when an interface changes, the narrow version range on implementors would force implementors to update to match the new version on the interface.

When to use @ProviderType? Well, really you don't need to. You'll see this annotation scattered all through your ServiceBuilder-generated code. It's included in this list not because you need to do it, but because you'll see it and likely wonder why it is there.

@ImplementationClassName

This is a Liferay annotation for ServiceBuilder entity interfaces. It defines the class from the service module that implements the interface.

This isn't an annotation you'll need to use yourself, but at least you'll know why it's there.

@Transactional

This is another Liferay annotation bound to ServiceBuilder service interfaces. It defines the transaction requirements for the service methods.

This is another annotation you won't be expected to use.

@Indexable

The @Indexable annotation is used to decorate a method which should result in an index update, typically tied to ServiceBuilder methods that add, update or delete entities.

You use the @Indexable annotation on your service implementation methods that add, update or delete indexed entities.  You'll know if your entities are indexed if you have an associated com.liferay.portal.kernel.search.Indexer implementation for your entity.
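
For example, a hypothetical ServiceBuilder entity Foo might decorate its service implementation methods like this (a sketch, not actual Liferay source):

@Indexable(type = IndexableType.REINDEX)
@Override
public Foo updateFoo(Foo foo) {
  return fooPersistence.update(foo);
}

@Indexable(type = IndexableType.DELETE)
@Override
public Foo deleteFoo(Foo foo) {
  return fooPersistence.remove(foo);
}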

@SystemEvent

The @SystemEvent annotation is tied to ServiceBuilder generated code which may result in system events.  System events work in concert with staging and the LAR export/import process.  For example, when a journal article is deleted, this generates a SystemEvent record.  When in a staging environment and when the "Publish to Live" occurs, the delete SystemEvent ensures that the corresponding journal article from live is also deleted.

When would you use the @SystemEvent annotation? Honestly I'm not sure. With my 10 years of experience, I've never had to generate SystemEvent records or modify the publication or LAR process.  If anyone out there has had to use or modify an @SystemEvent annotation, I'd love to hear about your use case.

@Meta

OSGi has an XML-based system for defining configuration details for Configuration Admin.  The @Meta annotations from the BND project allow BND to generate the file based on the annotations used in the configuration interfaces.

Important Note: In order to use the @Meta annotations, you must add the following line to your bnd.bnd file:
-metatype: *
If you fail to add this, your @Meta annotations will not be used when generating the XML configuration file.

@Meta.OCD

This is the annotation for the "Object Class Definition" aspect, the container for the configuration details.  This annotation is used on the interface level to provide the id, name and localization details for the class definition.

When do you use this annotation? When you are defining a Configuration Admin interface that will have a panel in the System Settings control panel to configure the component.

Note that the @Meta.OCD attributes include localization settings.  This allows you to use your resource bundle to localize the configuration name, the field level details and the @ExtendedObjectClassDefinition category.

@Meta.AD

This is the annotation for the "Attribute Definition" aspect, the field level annotation to define the specification for the configuration element. The annotation is used to provide the ID, name, description, default value and other details for the field.

When do you use this annotation? To provide details about the field definition that will control how it is rendered within the System Settings configuration panel.

@ExtendedObjectClassDefinition

This is a Liferay annotation to define the category for the configuration (to identify the tab in the System Settings control panel where the configuration will be) and the scope for the configuration.

Scope can be one of the following:

  • SYSTEM - Global configuration for the entire system, will only be one configuration instance shared system wide.
  • COMPANY - Company-level configuration that will allow one configuration instance per company in the portal.
  • GROUP - Group-level (site) configuration that allows for site-level configuration instances.
  • PORTLET_INSTANCE - This is akin to portlet instance preferences for scope, there will be a separate configuration instance per portlet instance.

When will you use this annotation? Every time you use the @Meta.OCD annotation, you're going to use the @ExtendedObjectClassDefinition annotation to at least define the tab the configuration will be added to.
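
Pulling the three annotations together, a configuration interface might look something like this (a sketch; the id, keys and category are all hypothetical):

@ExtendedObjectClassDefinition(
  category = "healthcheck",
  scope = ExtendedObjectClassDefinition.Scope.SYSTEM
)
@Meta.OCD(
  id = "com.example.healthcheck.configuration.HealthcheckConfiguration",
  localization = "content/Language",
  name = "healthcheck-configuration-name"
)
public interface HealthcheckConfiguration {

  // the percent-used threshold where the memory sensor flips from GREEN to YELLOW.
  @Meta.AD(deflt = "60", name = "memory-warn-threshold", required = false)
  public int memoryWarnThreshold();
}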

@OSGiBeanProperties

This is a Liferay annotation used to define the OSGi component properties used to register a Spring bean as an OSGi component. You'll find this used often in ServiceBuilder modules to expose the Spring beans into the OSGi container. Remember that ServiceBuilder is still Spring (and SpringExtender) based, so this annotation exposes those Spring beans as OSGi components.

When would you use this annotation? If you are using Spring Extender to use Spring within your module and you want to expose the Spring beans into OSGi so other modules can use the beans, you'll want to use this annotation.

I'm leaving a lot of details out of this section because the code for this annotation is extensively javadoced. Check it out: https://github.com/liferay/liferay-portal/blob/master/portal-kernel/src/com/liferay/portal/kernel/spring/osgi/OSGiBeanProperties.java

Conclusion

So that's like all of the annotations I've encountered so far in Liferay 7 CE / Liferay DXP. Hopefully these details will help you in your Liferay development efforts.

Find an annotation I've missed or want some more details on those I've included? Just ask.

Building an Extensible Health Check

Technical Blogs February 1, 2017 By David H Nebinger

Alt Title: Cool things you can do with OSGi

Introduction

So one thing that many organizations like to stand up in their Liferay environments is a "health check".  The goal is to provide a simple URL that monitoring systems can invoke to verify servers are functioning correctly.  The monitoring systems will review the time it takes to render the health check page and examine the contents to compare against known, expected results.  Should the page take too long to render or fail to return the expected result, the monitoring system will begin to alert operations staff.

The goal here is to allow operations to be proactive in resolving outage situations rather than being reactive when a client or supervisor calls in to see what is wrong with the site.

Now I'm not going to deliver a complete working health check system here (sorry in advance if you're disappointed).

What I am going to do is use this as an excuse to show how you can leverage some OSGi stuff to build out Liferay things that you really couldn't have easily done before.

Basically I'm going to build out an extensible health check system which exposes a simple URL and generates a simple HTML table that lists health check sensors and status indicators, the words GREEN, YELLOW and RED for the status of the sensors.  In case it isn't clear, GREEN is healthy, YELLOW means there are non-fatal issues, and RED means something is drastically wrong.

Extensible is the key word in the previous paragraph.  I don't want the piece rendering the HTML to have to know about all of the registered sensors.  As a developer, I want to be able to create new sensors as new systems are integrated into Liferay, etc.  I don't want to have to know about every possible sensor I'm ever going to create and deploy up front, I'll worry about adding new sensors as the need arises.

Defining The Sensor

So our health check system is going to be comprised of various sensors.  Our plan here is to follow the Unix concept of creating small, concise sensors that are each great at taking an individual sensor reading rather than one really big complicated sensor.

So to do this we're going to need to define our sensor interface:

public interface Sensor {
  public static final String STATUS_GREEN = "GREEN";
  public static final String STATUS_RED = "RED";
  public static final String STATUS_YELLOW = "YELLOW";

  /**
   * getRunSortOrder: Returns the order that the sensor should run.  Lower numbers
   * run before higher numbers.  When two sensors have the same run sort order, they
   * are subsequently ordered by name.
   * @return int The run sort order, lower numbers run before higher numbers.
   */
  public int getRunSortOrder();

  /**
   * getName: Returns the name of the sensor.  The name is also displayed in the HTML
   * for the health check report, so using human-readable names is recommended.
   * @return String The sensor display name.
   */
  public String getName();

  /**
   * getStatus: This is the meat of the sensor, this method is called to actually take
   * a sensor reading and return one of the status codes listed above.
   * @return String The sensor status.
   */
  public String getStatus();
}

Pretty simple, huh?  We accommodate the sorting of the sensors for running so we can have control over the test order, we support providing a display name for the HTML output, and we also provide the method for actually getting the sensor status.

That's all we need to get our extensible healthcheck system started.  Now that we have the sensor interface, let's build some real sensors.

Building Sensors

Obviously we are going to be writing classes that implement the Sensor interface.  The fun part for us is that we're going to take advantage of OSGi for all of our sensor registration, bundling, etc.

So the first option we have with the sensors is whether to combine them in one module or build them as separate modules.  The truth is we really don't care.  You can stick with one module or separate modules.  You could mix things up and create multiple modules that each have multiple sensors.  You can include your sensor for your portlet directly in that module to keep it close to what the sensor is testing.  It's entirely up to you.

Our only limitations are that we have a dependency on the Healthcheck API module and our components have to implement the interface and declare themselves with the @Component annotation.

So for our first sensor, let's look at the JVM memory.  Our sensor is going to look at the % of memory used, we'll return GREEN if 60% or less is used, YELLOW if 61-80% and RED if 81% or more is used.  We'll create this guy as a separate module, too.

Our memory sensor class is:

@Component(immediate = true, service = Sensor.class)
public class MemorySensor implements Sensor {
  public static final String NAME = "JVM Memory";

  @Override
  public int getRunSortOrder() {
    // This can run at any time, it's not dependent on others.
    return 5;
  }

  @Override
  public String getName() {
    return NAME;
  }

  @Override
  public String getStatus() {
    // need the percent used
    int pct = getPercentUsed();
    
    // if we are 60% or less, we are green.
    if (pct <= 60) {
      return STATUS_GREEN;
    }
    // if we are 61-80%, we are yellow
    if (pct <= 80) {
      return STATUS_YELLOW;
    }
    
    // if we are above 80%, we are red.
    return STATUS_RED;
  }

  protected double getTotalMemory() {
    double mem = Runtime.getRuntime().totalMemory();

    return mem;
  }

  protected double getFreeMemory() {
    double mem = Runtime.getRuntime().freeMemory();

    return mem;
  }

  protected double getUsedMemory() {
    return getTotalMemory() - getFreeMemory();
  }

  protected int getPercentUsed() {
    double used = getUsedMemory();
    double pct = (used / getTotalMemory()) * 100.0;

    return (int) Math.round(pct);
  }
  
  protected int getPercentAvailable() {
    double pct = (getFreeMemory() / getTotalMemory()) * 100.0;

    return (int) Math.round(pct);
  }
}

Not very fancy.  There are obvious enhancements we could pursue with this.  We could add a configuration instance so we could define the memory thresholds in the control panel rather than using hard coded values.  We could refine the measurement to account for GC.  Whatever.  The point is we have a sensor which is responsible for getting the status and returning the status string.

Now imagine what you can do with these sensors... You can add a sensor for accessing your database(s).  You can check that LDAP is reachable.  If you use external web services, you could call them to ensure they are reachable (even better if they, too, have some sort of health check facility, your health check can incorporate their health check).

Your sensor options are only limited to what you are capable of creating.

I'd recommend keeping the sensors simple and fast; you don't want a long-running sensor chewing up time/CPU just to get some idea of server health.
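
For instance, a database sensor could be as simple as checking that a connection can be borrowed from Liferay's data source. Here's a quick sketch (assuming the kernel's InfrastructureUtil and a 5 second validity timeout):

@Component(immediate = true, service = Sensor.class)
public class DatabaseSensor implements Sensor {
  public static final String NAME = "Database";

  @Override
  public int getRunSortOrder() {
    return 10;
  }

  @Override
  public String getName() {
    return NAME;
  }

  @Override
  public String getStatus() {
    // borrow a connection and verify it is usable within 5 seconds.
    try (Connection conn = InfrastructureUtil.getDataSource().getConnection()) {
      return conn.isValid(5) ? STATUS_GREEN : STATUS_RED;
    } catch (Exception e) {
      return STATUS_RED;
    }
  }
}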

Building The Sensor Manager

The sensor manager is another key part of our extensible healthcheck system.

The sensor manager is going to use a ServiceTracker so it knows all the sensors that are available and gracefully handles the addition and removal of new Sensor components.  Here's the SensorManager:

@Component(immediate = true, service = SensorManager.class)
public class SensorManager {

  /**
   * getHealthStatus: Returns the map of current health statuses.
   * @return Map map of statuses, key is the sensor name and value is the sensor status.
   */
  public Map<String,String> getHealthStatus() {
    StopWatch totalWatch = null;

    // time the total health check
    if (_log.isDebugEnabled()) {
      totalWatch = new StopWatch();

      totalWatch.start();
    }

    // grab the list of sensors from our service tracker
    List<Sensor> sensors = _serviceTracker.getSortedServices();

    // create a map to hold the sensor status results
    Map<String,String> statuses = new HashMap<>();

    // if we have at least one sensor
    if ((sensors != null) && (! sensors.isEmpty())) {
      String status;
      StopWatch sensorWatch = null;

      // create a stopwatch to time the sensors
      if (_log.isDebugEnabled()) {
        sensorWatch = new StopWatch();
      }

      // for each registered sensor
      for (Sensor sensor : sensors) {
        // reset the stopwatch for the run
        if (_log.isDebugEnabled()) {
          sensorWatch.reset();
          sensorWatch.start();
        }

        // get the status from the sensor
        status = sensor.getStatus();

        // add the sensor and status to the map
        statuses.put(sensor.getName(), status);

        // report sensor run time
        if (_log.isDebugEnabled()) {
          sensorWatch.stop();

          _log.debug("Sensor [" + sensor.getName() + "] run time: " + DurationFormatUtils.formatDurationWords(sensorWatch.getTime(), true, true));
        }
      }
    }

    // report health check run time
    if (_log.isDebugEnabled()) {
      totalWatch.stop();

      _log.debug("Health check run time: " + DurationFormatUtils.formatDurationWords(totalWatch.getTime(), true, true));
    }

    // return the status map
    return statuses;
  }

  @Activate
  protected void activate(BundleContext bundleContext, Map<String, Object> properties) {

    // if we have a current service tracker (likely not), let's close it.
    if (_serviceTracker != null) {
      _serviceTracker.close();
    }

    // create a new sorting service tracker.
    _serviceTracker = new SortingServiceTracker<>(bundleContext, Sensor.class.getName(), new Comparator<Sensor>() {

      @Override
      public int compare(Sensor o1, Sensor o2) {
        // compare method to sort primarily on run order and secondarily on name.
        if ((o1 == null) && (o2 == null)) return 0;
        if (o1 == null) return -1;
        if (o2 == null) return 1;

        if (o1.getRunSortOrder() != o2.getRunSortOrder()) {
          return o1.getRunSortOrder() - o2.getRunSortOrder();
        }

        return o1.getName().compareTo(o2.getName());
      }
    });
  }

  @Deactivate
  protected void deactivate() {
    if (_serviceTracker != null) {
      _serviceTracker.close();
    }
  }

  private SortingServiceTracker<Sensor> _serviceTracker;
  private static final Log _log = LogFactoryUtil.getLog(SensorManager.class);
}

The SensorManager has the ServiceTracker instance to retrieve the list of registered Sensor services and uses the list to grab each sensor status.  The getHealthStatus() method is the utility method to hide all of the implementation details but expose the ability to grab the map of sensor status details.
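
Consuming the manager is then just a normal @Reference away. For example, a render or resource method could build the HTML table with something like this rough sketch:

@Reference(unbind = "-")
protected void setSensorManager(SensorManager sensorManager) {
  _sensorManager = sensorManager;
}

protected String buildStatusTable() {
  StringBuilder sb = new StringBuilder("<table>");

  // one row per sensor: name plus status string.
  for (Map.Entry<String, String> entry : _sensorManager.getHealthStatus().entrySet()) {
    sb.append("<tr><td>").append(entry.getKey()).append("</td><td>").append(entry.getValue()).append("</td></tr>");
  }

  sb.append("</table>");

  return sb.toString();
}

private SensorManager _sensorManager;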

Conclusion

Yep, that's right, this is the conclusion.  That's really all there is to see here.

I mean, there is more: you need a portlet to serve up the health status on demand (a serve-resource request can work fine here), and just displaying the health status in the portlet view will allow admins to see the health whenever they log into the portal.  And you can add a servlet so external monitoring systems can hit your status page using /o/healthcheck/status (my checked in project supports this).

But yeah, that's not really important with respect to showing cool OSGi stuff.

Ideally this becomes a platform for you to build out an expandable health check system in your own environment.  Pull down the project, start writing your own Sensor implementations and check out the results.

If you build some cool sensors you want to share, send me a PR and I'll add them to the project.

In fact, let's consider this to be like a community project.  If you use it and find issues, feel free to submit PRs with fixes.  If you build some Sensors, submit a PR with them.  If you come up with a cool enhancement, send a PR.  I'll do some minimal verification and merge everything in.

Here's the github project link to get you started: https://github.com/dnebing/healthcheck

Alt Conclusion

Just like there's an alternate title, there's an alternate conclusion.

The alternate conclusion here is that there are some really cool things you can do when you embrace OSGi in Liferay, pretty much the way Liferay has embraced OSGi.

OSGi offers a way to build expandable systems that are very decoupled.  If you need this kind of expansion, focus on separating your API from your implementations, then use a ServiceTracker to access all available instances.
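
At its simplest, that's just the stock OSGi ServiceTracker (the SortingServiceTracker used above wraps this same idea); a minimal sketch:

// open a tracker for all registered Sensor services.
ServiceTracker<Sensor, Sensor> tracker = new ServiceTracker<>(bundleContext, Sensor.class, null);

tracker.open();

// getServices() returns whatever implementations happen to be deployed right now.
Object[] sensors = tracker.getServices();

// ... use the sensors ...

// close the tracker when your component deactivates.
tracker.close();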

Liferay uses this kind of thing extensively.  The product menu is extensible this way, the My Account pages are extensible in this way, heck even the LiferayMVC portlet implementations using MVCActionCommand and MVCResourceCommand interfaces rely on the power of OSGi to handle the dynamic services.

LiferayMVC is actually an interesting example; there, instead of managing a service tracker list, they manage a service tracker map where the key is the MVC command.  So the LiferayMVC portlet uses the incoming MVC command to get the service instance based on the command and passes control to it for processing.  This makes the portlet more extensible because anyone can add a new command or override an existing command (using service ranking) and the original portlet module doesn't need to be touched at all.

Where can you find examples of things you can do leveraging OSGi concepts?  The Liferay source, of course.  Liferay eats its own dog food, and they do a lot more with OSGi than I've ever needed to.  If you have some idea of a thing to do that benefits from an OSGi implementation but need an example of how to do it, find something in the Liferay source that has a similar implementation and see how they did it.

Liferay 7 Notifications

Technical Blogs January 18, 2017 By David H Nebinger

So in a recent project I've been building I reached a point where I believed my project would benefit from being able to issue user notifications.

For those that are not aware, Liferay has a built-in system for subscriptions and notifications.  Using these APIs, you can quickly add notifications to your projects.

Foundation

Before diving into the implementation, let's talk about the foundation of the subscription and notification APIs.

The first thing you need to identify is your events.  Events are what users will subscribe to, and when events occur you want to issue notifications.  Knowing your list of events going in will make things easier as you'll know when you'll need to send notifications.

For this blog post we're going to be implementing an example, so I'm going to build a system to issue a notification when a user with the Administrator role logs in.  We all know that for effective site security no one should be using Administrator credentials because of the unlimited access Administrators have, so getting a notification when an Administrator logs in can be a good security test.

So we know the event, "Administrator has logged in", the rest of the blog will build out support for subscriptions and notifications.

One thing we will need to pull all of this together is a portlet module (since we have some interface requirements).  Let's start by building a new portlet module.  You can use your IDE, but to be IDE-neutral I'm going to stick with blade commands:

blade create -t mvcportlet -p com.dnebinger.admin.notification admin-notification

This will give us a new Liferay MVC portlet using our desired package and a simple project directory.

Subscribing To The Event

The first thing that users need to be able to do is to subscribe to your events.  Some portal examples are blogs (where a user can subscribe to all blogs or an individual blog for changes).  So the challenge for you is to identify where a user would subscribe to your event.  In some cases you would display the subscription option right on the page (i.e. blogs and forum threads), in other cases you might want to move it to a configuration panel.

For this portlet, there is no real UI work; we're just going to have a JSP with a checkbox for subscribe/unsubscribe, but the important part is the Liferay subscription APIs we're going to use.

Subscription is handled by the com.liferay.portal.kernel.service.SubscriptionLocalService service.  When you're subscribing, you'll be using the addSubscription() method, and when you're unsubscribing you're going to use the deleteSubscription() method.

Arguments for the calls are:

  • userId: The user who is subscribing/unsubscribing.
  • groupId: (for adds) the group the user is subscribing to (for 'containers' like a doc lib folder or forum category).
  • className: The name of the class the user wants to monitor for changes.
  • pkId: The primary key for the object to (un)subscribe to.

Normally you'll be building subscription into your ServiceBuilder entities, so it is common practice to put subscribe and unsubscribe methods into the service interface to combine the subscription features with the data access.

For this project, we don't really have an entity or a data access layer, so we're just going to handle subscribe/unsubscribe directly in our action command handler.  Also we don't really have a PK id so we'll just stick with an id of 0 and the portlet class as the class name.

@Component(
  immediate = true,
  property = {
    "javax.portlet.name=" + AdminNotificationPortletKeys.ADMIN_NOTIFICATION_PORTLET_KEY,
    "mvc.command.name=/update_subscription"
  },
  service = MVCActionCommand.class
)
public class SubscribeMVCActionCommand extends BaseMVCActionCommand {
  @Override
  protected void doProcessAction(ActionRequest actionRequest, ActionResponse actionResponse) throws Exception {
    String cmd = ParamUtil.getString(actionRequest, Constants.CMD);

    if (Validator.isNull(cmd)) {
      // no command given; a real implementation should report an error here.
      return;
    }

    long userId = PortalUtil.getUserId(actionRequest);

    if (Constants.SUBSCRIBE.equals(cmd)) {
      _subscriptionLocalService.addSubscription(userId, 0, AdminNotificationPortlet.class.getName(), 0);
    } else if (Constants.UNSUBSCRIBE.equals(cmd)) {
      _subscriptionLocalService.deleteSubscription(userId, AdminNotificationPortlet.class.getName(), 0);
    }
  }

  @Reference(unbind = "-")
  protected void setSubscriptionLocalService(final SubscriptionLocalService subscriptionLocalService) {
    _subscriptionLocalService = subscriptionLocalService;
  }

  private SubscriptionLocalService _subscriptionLocalService;
}

User Notification Preferences

Users can manage their notification preferences separately from portlets which issue notifications.  When you go to the "My Account" area of the side bar, there's a "Notifications" option.  When you click this link, you will normally see your list of notifications.  But if you click on the dot menu in the upper right corner, you can choose "Configuration" to see all of the magic.

This page has a collapsible area for each registered notifying portlet, and within each area is a line item for a type of notification and sliders to receive notifications by email and/or website (assuming the portlet has indicated that it supports both types of notifications).  In the future Liferay or your team might add more notification methods (i.e. SMS or other), and for those cases the portlet just needs to indicate that it also supports the notification type.

So how do we get our collapsible panel registered to appear on this page?  Well, through the magic of OSGi DS service annotations, of course!

There are two types of classes that we need to implement.  The first extends the com.liferay.portal.kernel.notifications.UserNotificationDefinition class.  As the class name suggests, this class provides the definition of the type of notifications the portlet can send and the notification types it supports.

@Component(
  immediate = true,
  property = {"javax.portlet.name=" + AdminNotificationPortletKeys.ADMIN_NOTIFICATION},
  service = UserNotificationDefinition.class
)
public class AdminLoginUserNotificationDefinition extends UserNotificationDefinition {
  public AdminLoginUserNotificationDefinition() {
    // pass in our portlet key, 0 for a class name id (don't care about it), the notification type (not really), and
    // finally the resource bundle key for the message the user sees.
    super(AdminNotificationPortletKeys.ADMIN_NOTIFICATION, 0,
      AdminNotificationType.NOTIFICATION_TYPE_ADMINISTRATOR_LOGIN,
      "receive-a-notification-when-an-admin-logs-in");

    // add a notification type for each sort of notification that we want to support.
    addUserNotificationDeliveryType(
      new UserNotificationDeliveryType(
        "email", UserNotificationDeliveryConstants.TYPE_EMAIL, true, true));
    addUserNotificationDeliveryType(
      new UserNotificationDeliveryType(
        "website", UserNotificationDeliveryConstants.TYPE_WEBSITE, true, true));
  }
}

This component registers our notification definition with the notification framework.  The constructor binds the notification to our custom portlet and provides the message key for the notification panel.  We also add the two user notification delivery types that we'll support, one for sending an email and one for the notifications portlet.

When you go to the Notifications panel under My Account and choose the Configuration option from the menu in the upper-right corner, you can see the notification preferences:

Notification Type Selection

Handling Notifications

Another aspect of notifications is the UserNotificationHandler implementation.  The UserNotificationHandler's job is to interpret the notification event and determine whether to deliver the notification and build the UserNotificationFeedEntry (basically the notification message itself).

Liferay provides a number of base implementation classes that you can use to build your own UserNotificationHandler instance from:

  • com.liferay.portal.kernel.notifications.BaseUserNotificationHandler - This implements a simple user notification handler with hook points for overriding the body of the notification and other key details, but for the most part it is capable of building all of the basic notification details.
  • com.liferay.portal.kernel.notifications.BaseModelUserNotificationHandler - This is another base class, suitable for asset-enabled entities.  It uses the AssetRenderer for the entity class to render the asset, and the rendered asset is used as the message for the notification.

Obviously if you have an asset-enabled entity you're notifying on, you'd want to use the BaseModelUserNotificationHandler.  For our implementation we're going to use BaseUserNotificationHandler as the base class:

@Component(
  immediate = true,
  property = {"javax.portlet.name=" + AdminNotificationPortletKeys.ADMIN_NOTIFICATION},
  service = UserNotificationHandler.class
)
public class AdminLoginUserNotificationHandler extends BaseUserNotificationHandler {

  /**
   * AdminLoginUserNotificationHandler: Constructor.
   */
  public AdminLoginUserNotificationHandler() {
    setPortletId(AdminNotificationPortletKeys.ADMIN_NOTIFICATION);
  }

  @Override
  protected String getBody(UserNotificationEvent userNotificationEvent, ServiceContext serviceContext) throws Exception {
    String username = LanguageUtil.get(serviceContext.getLocale(), _UNKNOWN_USER_KEY);
    
    // okay, we need to get the user for the event
    User user = _userLocalService.fetchUser(userNotificationEvent.getUserId());

    if (Validator.isNotNull(user)) {
      // get the company the user belongs to.
      Company company = _companyLocalService.fetchCompany(user.getCompanyId());

      // based on the company auth type, find the user name to display.
      // so we'll get screen name or email address or whatever they're using to log in.

      if (Validator.isNotNull(company)) {
        if (company.getAuthType().equals(CompanyConstants.AUTH_TYPE_EA)) {
          username = user.getEmailAddress();
        } else if (company.getAuthType().equals(CompanyConstants.AUTH_TYPE_SN)) {
          username = user.getScreenName();
        } else if (company.getAuthType().equals(CompanyConstants.AUTH_TYPE_ID)) {
          username = String.valueOf(user.getUserId());
        }
      }
    }

    // we'll be stashing the client address in the payload of the event, so let's extract it here.
    JSONObject jsonObject = JSONFactoryUtil.createJSONObject(
      userNotificationEvent.getPayload());

    String fromHost = jsonObject.getString(Constants.FROM_HOST);

    // fetch our strings via the language bundle.
    String title = LanguageUtil.get(serviceContext.getLocale(), _TITLE_KEY);

    String body = LanguageUtil.format(serviceContext.getLocale(), _BODY_KEY, new Object[] {username, fromHost});

    // build the html using our template.
    String html = StringUtil.replace(_BODY_TEMPLATE, _BODY_REPLACEMENTS, new String[] {title, body});

    return html;
  }

  @Reference(unbind = "-")
  protected void setUserLocalService(final UserLocalService userLocalService) {
    _userLocalService = userLocalService;
  }
  @Reference(unbind = "-")
  protected void setCompanyLocalService(final CompanyLocalService companyLocalService) {
    _companyLocalService = companyLocalService;
  }

  private UserLocalService _userLocalService;
  private CompanyLocalService _companyLocalService;

  private static final String _TITLE_KEY = "title.admin.login";
  private static final String _BODY_KEY = "body.admin.login";
  private static final String _UNKNOWN_USER_KEY = "unknown.user";

  private static final String _BODY_TEMPLATE = "<div class=\"title\">[$TITLE$]</div><div class=\"body\">[$BODY$]</div>";
  private static final String[] _BODY_REPLACEMENTS = new String[] {"[$TITLE$]", "[$BODY$]"};

  private static final Log _log = LogFactoryUtil.getLog(AdminLoginUserNotificationHandler.class);
}

This handler basically builds the body of the notification itself, based on the admin login details (who logged in and where they logged in from).  When building out your own notification, you'll likely want to use the notification event payload to pass details.  We're passing just the host where the admin is coming from, so our payload is as simple as it gets, but you could easily pass XML or JSON or whatever structured string carries the necessary notification details.
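
If you need to carry more details, you just add keys when the payload is populated; you'll see where that happens in the SubscriptionSender below.  A minimal sketch, where the userAgent and loginTime keys (and the _userAgent field) are hypothetical additions, not part of this project:

  @Override
  protected void populateNotificationEventJSONObject(
      JSONObject notificationEventJSONObject) {

    super.populateNotificationEventJSONObject(notificationEventJSONObject);

    notificationEventJSONObject.put(Constants.FROM_HOST, _fromHost);

    // hypothetical extras; the handler reads them back with getString()/getLong()
    notificationEventJSONObject.put("userAgent", _userAgent);
    notificationEventJSONObject.put("loginTime", System.currentTimeMillis());
  }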

This handler builds the same notification body for both the email and the notifications portlet display.  Since the method is passed the UserNotificationEvent, you can use its getDeliveryType() method to build different bodies depending upon whether you are building an email notification or a notifications portlet display message.
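
A minimal sketch of that branching inside getBody(); _BODY_EMAIL_KEY is a hypothetical extra language key for a more verbose email body:

    String bodyKey = _BODY_KEY;

    // email gets the verbose body, the website feed keeps the terse one
    if (userNotificationEvent.getDeliveryType() ==
        UserNotificationDeliveryConstants.TYPE_EMAIL) {
      bodyKey = _BODY_EMAIL_KEY;
    }

    String body = LanguageUtil.format(
      serviceContext.getLocale(), bodyKey, new Object[] {username, fromHost});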

Publishing Notification Events

So far we have code to allow users to subscribe to our administrator login event, we allow them to choose how they want to receive the notifications, and we have code to transform the notification event into a notification message.  What remains is actually issuing the notification events themselves.

This is very much going to be dependent upon your event source.  Most Liferay events are based on the addition or modification of some entity, so it is common to find their event publishing code in the service implementation classes when the entities are added or updated.  Your own notification events can come from wherever the event originates, even outside of the service layer.

Our notification event is based on an administrator login; the best way to publish these kinds of events is through a post login component.  We'll define the new component as such:

@Component(
  immediate = true, property = {"key=login.events.post"},
  service = LifecycleAction.class
)
public class AdminLoginNotificationEventSender implements LifecycleAction {

  @Override
  public void processLifecycleEvent(LifecycleEvent lifecycleEvent)
      throws ActionException {

    // get the request associated with the event
    HttpServletRequest request = lifecycleEvent.getRequest();

    // get the user associated with the event
    User user = null;

    try {
      user = PortalUtil.getUser(request);
    } catch (PortalException e) {
      // failed to get the user, just ignore this
    }

    if (user == null) {
      // failed to get a valid user, just return.
      return;
    }

    // We have the user, but are they an admin?
    PermissionChecker permissionChecker = null;

    try {
      permissionChecker = PermissionCheckerFactoryUtil.create(user);
    } catch (Exception e) {
      // ignore the exception
    }

    if (permissionChecker == null) {
      // failed to get a permission checker
      return;
    }

    // If the permission checker indicates the user is not omniadmin, nothing to report.
    if (! permissionChecker.isOmniadmin()) {
      return;
    }

    // this user is an administrator, need to issue the event
    ServiceContext serviceContext = null;

    try {
      // create a service context for the call
      serviceContext = ServiceContextFactory.getInstance(request);

      // note that when you're behind an LB, the remote host may be the address
      // for the LB instead of the remote client.  In these cases the LB will often
      // add a request header with a special key that holds the remote client host
      // so you'd want to use that if it is available.
      String fromHost = request.getRemoteHost();

      // notify subscribers
      notifySubscribers(user.getUserId(), fromHost, user.getCompanyId(), serviceContext);
    } catch (PortalException e) {
      // ignored
    }
  }

  protected void notifySubscribers(long userId, String fromHost, long companyId, ServiceContext serviceContext)
      throws PortalException {

    // so all of this stuff should normally come from some kind of configuration.
    // As this is just an example, we're using a lot of hard coded values and portal-ext.properties values.
    
    String entryTitle = "Admin User Login";

    String fromName = PropsUtil.get(Constants.EMAIL_FROM_NAME);
    String fromAddress = GetterUtil.getString(PropsUtil.get(Constants.EMAIL_FROM_ADDRESS), PropsUtil.get(PropsKeys.ADMIN_EMAIL_FROM_ADDRESS));

    LocalizedValuesMap subjectLocalizedValuesMap = new LocalizedValuesMap();
    LocalizedValuesMap bodyLocalizedValuesMap = new LocalizedValuesMap();

    subjectLocalizedValuesMap.put(Locale.ENGLISH, "Administrator Login");
    bodyLocalizedValuesMap.put(Locale.ENGLISH, "Administrator has logged in.");

    AdminLoginSubscriptionSender subscriptionSender =
        new AdminLoginSubscriptionSender();

    subscriptionSender.setFromHost(fromHost);

    subscriptionSender.setClassPK(0);
    subscriptionSender.setClassName(AdminNotificationPortlet.class.getName());
    subscriptionSender.setCompanyId(companyId);

    subscriptionSender.setCurrentUserId(userId);
    subscriptionSender.setEntryTitle(entryTitle);
    subscriptionSender.setFrom(fromAddress, fromName);
    subscriptionSender.setHtmlFormat(true);

    int notificationType = AdminNotificationType.NOTIFICATION_TYPE_ADMINISTRATOR_LOGIN;

    subscriptionSender.setNotificationType(notificationType);

    String portletId = PortletProviderUtil.getPortletId(AdminNotificationPortletKeys.ADMIN_NOTIFICATION, PortletProvider.Action.VIEW);

    subscriptionSender.setPortletId(portletId);

    subscriptionSender.setReplyToAddress(fromAddress);
    subscriptionSender.setServiceContext(serviceContext);

    subscriptionSender.addPersistedSubscribers(
        AdminNotificationPortlet.class.getName(), 0);

    subscriptionSender.flushNotificationsAsync();
  }
}
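
As the comment in processLifecycleEvent() notes, when you're behind a load balancer you'd likely prefer the forwarding header over the socket-level remote host.  A minimal sketch, assuming the common X-Forwarded-For header (check what your load balancer actually sends):

    // prefer the LB's forwarding header when present; "X-Forwarded-For" is an
    // assumption about your load balancer's configuration
    String fromHost = GetterUtil.getString(
      request.getHeader("X-Forwarded-For"), request.getRemoteHost());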

This is a LifecycleAction component that registers as a post login lifecycle event listener.  It goes through a series of checks to determine if the user is an administrator and, when they are, it issues a notification.  The real fun happens in the notifySubscribers() method.

This method has a lot of initialization and setting of the subscriptionSender properties.  This variable is of type AdminLoginSubscriptionSender, a class which extends SubscriptionSender.  This is the guy that handles the actual notification sending.

The flushNotificationsAsync() method pushes the instance onto the Liferay Message Bus, where a message receiver gets the SubscriptionSender and invokes its flushNotifications() method (you can call that method directly if you don't need async notification sending).

The flushNotifications() method does some permission checking, user verification, and filtering (e.g., not sending a notification to the user that generated the event) and eventually sends the email notification and/or adds the user notification for the notifications portlet.
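
If you don't need async delivery, the synchronous variant is a direct call.  A sketch (note the checked exception; _log is whatever logger your class declares):

  try {
    // same checks and delivery logic, but runs inline rather than on the message bus
    subscriptionSender.flushNotifications();
  } catch (Exception e) {
    _log.error("Unable to flush notifications", e);
  }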

The AdminLoginSubscriptionSender class is pretty simple:

public class AdminLoginSubscriptionSender extends SubscriptionSender {

  private static final long serialVersionUID = -7152698157653361441L;

  @Override
  protected void populateNotificationEventJSONObject(
      JSONObject notificationEventJSONObject) {

    super.populateNotificationEventJSONObject(notificationEventJSONObject);

    notificationEventJSONObject.put(Constants.FROM_HOST, _fromHost);
  }

  @Override
  protected boolean hasPermission(Subscription subscription, String className, long classPK, User user) throws Exception {
    return true;
  }

  @Override
  protected boolean hasPermission(Subscription subscription, User user) throws Exception {
    return true;
  }

  @Override
  protected void sendNotification(User user) throws Exception {
    // The super class filters out notifications to the user who triggered the
    // event.  That makes sense in most cases, but here we want a notification
    // whenever any admin logs in from anywhere at any time.

    // It will be a pain getting notified of our own logins, but we want to know
    // if some hacker gets our admin credentials and logs in when it's not really us.

    sendEmailNotification(user);
    sendUserNotification(user);
  }

  public void setFromHost(String fromHost) {
    this._fromHost = fromHost;
  }

  private String _fromHost;
}

Putting It All Together

Okay, so now we have everything:

  • We have code to allow users to subscribe to the admin login events.
  • We have code to allow users to select how they receive notifications.
  • We have code to transform the notification message in the database into HTML for display in the notifications portlet.
  • We have code to send the notifications when an administrator logs in.

Now we can test it all out.  After building and deploying the module, you're pretty much ready to go.

You'll have to log in and sign up to receive the notifications.  Note that if you're using your local test Liferay environment, you might not have email enabled, so be sure to enable the website notifications.  In fact, in my DXP environment I don't have email configured and I got a slew of exceptions from the Liferay email subsystem; I ignored them since they stem from not having email set up.

Then log out and log in again as an administrator and you should see your notification pop up in the left sidebar.

Admin Notifications Messages

Conclusion

So there you have it, basic code that will support creating and sending notifications.  You can use this code to add notifications support into your own portlets and take them wherever you need.

You can find the code for this project up on github: https://github.com/dnebing/admin-notification

Enjoy!

Removing Panels from My Account

Technical Blogs December 29, 2016 By David H Nebinger

So recently I was asked, "How can panels be removed from the My Account portlet?"

It seems like such a deceptively simple question since it used to be a supported feature, but my response to the question was that it is just not possible anymore.

Back in the 6.x days, all you needed to do was override the users.form.my.account.main, identification and/or miscellaneous properties in portal-ext.properties and you could take out any of the panels you didn't want the users to see in the my account panel.

The problem with the old way is that while it was easy to remove panels or reorganize panels, it was extremely difficult to add new custom panels to My Account.

With Liferay 7 and DXP, things have swung the other way.  With OSGi, it is extremely easy to add new panels to My Account.  Just create a new component that implements the FormNavigatorEntry interface and deploy it, and Liferay will happily present your custom panel in My Account.  See https://dev.liferay.com/develop/tutorials/-/knowledge_base/7-0/form-navigator for information on how to do this.
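
For instance, a minimal sketch of such a component might look like the following; the entry key, category key, and label key here are illustrative assumptions, not from an actual project:

@Component(
	immediate = true,
	property = {"form.navigator.entry.order:Integer=10"},
	service = FormNavigatorEntry.class
)
public class MyCustomFormNavigatorEntry implements FormNavigatorEntry<User> {

	@Override
	public String getCategoryKey() {
		// assumed: the key of an existing category such as user information
		return "user-information";
	}

	@Override
	public String getFormNavigatorId() {
		return FormNavigatorConstants.FORM_NAVIGATOR_ID_USERS;
	}

	@Override
	public String getKey() {
		// hypothetical key for our custom panel
		return "my-custom-panel";
	}

	@Override
	public String getLabel(Locale locale) {
		return LanguageUtil.get(locale, "my-custom-panel");
	}

	@Override
	public void include(HttpServletRequest request, HttpServletResponse response) throws IOException {
		// real implementations typically render a JSP here
		response.getWriter().write("<div>My custom panel content</div>");
	}

	@Override
	public boolean isVisible(User user, User selUser) {
		return true;
	}
}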

Although it is easy to add new panels, there is no way to remove panels.  So I set out to find out if there was a way to restore this functionality.

Background

In Liferay 7, the new Form Navigator components replace the old portal properties setup.  But while there used to be three separate sets of properties (one for adds, one for updates, and one for My Account), they have all been merged into a single set.  So even if there were a supported way to disable a panel, it would disable the panel not only in My Account but also in the Users control panel, which is not something most sites want.

So this problem actually points to the solution - in order to control the My Account panels separately, we need a completely separate Form Navigator setup.  If we had a separate setup, we'd also need a JSP fragment bundle override to use the custom Form Navigator rather than the original.

Creating the My Account Form Navigator

This is actually a simple task because the Liferay code is already deployed as an OSGi bundle, but more importantly the current classes are declared in an export package in the module's BND file.  This means that our classes can extend the Liferay classes without dependency issues.

Here's one of the new form navigator entries that I came up with:

@Component(
	immediate = true,
	property = {"form.navigator.entry.order:Integer=70"},
	service = FormNavigatorEntry.class
)
public class UserPasswordFormNavigatorEntryExt extends UserPasswordFormNavigatorEntry {

	private boolean visible = GetterUtil.getBoolean(PropsUtil.get(
		Constants.MY_ACCOUNT_PASSWORD_VISIBLE), true);

	@Override
	public String getFormNavigatorId() {
		return Constants.MY_ACCOUNT_PREFIX + super.getFormNavigatorId();
	}

	@Override
	public boolean isVisible(User user, User selUser) {
		return visible && super.isVisible(user, selUser);
	}
}

So first off, our navigator entry extends the original entry, and that saves us a lot of custom coding.  We're using a custom portal property to disable the panel, so we fetch that up front.  We use the property setting along with the super class method to determine if the panel should be visible.  This allows us to set a portal-ext.properties value to disable the panel so it just gets excluded.

The most important part is the override for the form navigator ID.  We're declaring a custom ID that will separate how the new entries and categories will be made available.

All of the other form navigator entries will follow the same pattern, they'll extend the original class, use a custom property to allow for disabling, and they'll use the special category prefix for the entries.

We'll do something similar with the category overrides:

@Component(
	property = {"form.navigator.category.order:Integer=30"},
	service = FormNavigatorCategory.class
)
public class UserUserInformationFormNavigatorCategoryExt extends UserUserInformationFormNavigatorCategory {
	@Override
	public String getFormNavigatorId() {
		return Constants.MY_ACCOUNT_PREFIX + super.getFormNavigatorId();
	}
}

For the three component categories we'll return our new form navigator ID string.

Creating the My Account JSP Fragment Bundle

Now that we have a custom form navigator, we'll need a JSP override on My Account to use the new version.

Our fragment bundle is based on the My Account portlet, so we can use the following blade command:

blade create -t fragment -h com.liferay.my.account.web -H 1.0.4 users-admin-web-my-account

If you're using an IDE, just use the equivalent to create a fragment module from the My Account web module for the version currently used in your portal (check the $LIFERAY_HOME/work folder for the com.liferay.my.account.web folder as it includes the version number in your portal).

To use our new form navigator, we need to override the edit_user.jsp page.  This page actually comes from the modules/apps/foundation/users-admin/users-admin-web module from the Liferay source (during build time it is copied into the My Account module).  Copy the file from the source to your new module as this will be the foundation for the change.

I made two changes to the file.  The first adds some Java code to compute the form navigator id in a variable; the second changes the <liferay-ui:form-navigator /> tag declaration to use the variable instead of a constant.

Here's the java code I added:

// initialize to the real form navigator id.
String formNavigatorId = FormNavigatorConstants.FORM_NAVIGATOR_ID_USERS;

// if this is the "My Account" portlet...
if (portletName.equals(myAccountPortletId)) {
	// include the special prefix
	formNavigatorId = "my.account." + formNavigatorId;
}

We start with the current constant value for the navigator id.  Then if the portlet id matches the My Account portlet, we'll add our form navigator ID prefix to the variable.

Here's the change to the form navigator tag:

<liferay-ui:form-navigator
	backURL="<%= backURL %>"
	formModelBean="<%= selUser %>"
	id="<%= formNavigatorId %>"
	markupView="lexicon">
</liferay-ui:form-navigator>

Build and deploy all of your modules to begin testing.

Testing

Testing is actually pretty easy to do.  Start by adding some portal-ext.properties file entries in order to disable some of the panels:

my.account.addresses.visible=false
my.account.additional.email.visible=false
my.account.instant.messenger.visible=false
my.account.open.id.visible=false
my.account.phone.numbers.visible=false
my.account.sms.visible=false
my.account.social.network.visible=false
my.account.websites.visible=false

If your portal is running, you'll need to restart the portal for the properties to take effect.

Start by logging in and going to the Users control panel and choose to edit a user.  Don't worry, we're not editing, we're just looking to see that all of the panels are there:

User Control Panel View

Next, navigate to My Account -> Account Settings to see if the changes have been applied:

My Account Panel

Here we can see that the whole Identification section has been removed since all of the panels have been disabled.  We can also see that password and personal site template sections have been removed from the view.

Conclusion

So this project has been uploaded to GitHub: https://github.com/dnebing/my-account-override.  Feel free to clone it and use it as you see fit.

It uses properties in portal-ext.properties to disable specific panels from the My Account page while leaving the user control panel unchanged.

There is one caveat to remember though.

When we created the fragment, we had to use the version number currently deployed in the portal.  This means that any upgrade to the portal, to a new GA or a new SP or FP or even hot fix may change the version number of the My Account portlet.  This will force you to come back to your fragment bundle and change the version in the BND file.  In fact, my recommendation here would be to match your bundle version to the fragment host bundle version as it will be easier to identify whether you have the right version(s) deployed.

Otherwise you should be good to go!

Update: You can eliminate the above caveat by using a version range in your bnd file.  I changed the line in the bnd.bnd file to be:

Fragment-Host: com.liferay.my.account.web;bundle-version="[1.0.4,2.0.0)"

This allows your fragment bundle to apply to any newer version of the module that gets deployed up to 2.0.0.  Note, however, that you should still compare each release to ensure you're not losing any fixes or improvements that have been shipped as an update.

Extending Liferay OSGi Modules

Technical Blogs December 20, 2016 By David H Nebinger

Recently I was working on a fragment bundle for a JSP override to the message boards and I wanted to wrap the changes so they could be disabled by a configuration property.

But the configuration is managed by a Java interface and set via the OSGi Configuration Admin service in a completely different module jar contained in an LPKG file in the osgi directory.

So I wondered if there was a way to weave in a change to the Java interface to include my configuration item, in a concept similar to the plugin extending a plugin technique.

And in fact there is and I'm going to share it with you here...

Creating The Module

First, we're going to build a Gradle module in a Liferay workspace, so you're going to need one of those.

In our modules directory we're going to create a new folder named message-boards-api-ext to contain our new module.  Actually, the name of the folder doesn't matter too much, so feel free to follow your own naming standards.

We need a gradle build file, and since we're in a Liferay workspace, our build.gradle file is pretty simple:

dependencies {
    compileOnly group: "biz.aQute.bnd", name: "biz.aQute.bndlib", version: "3.1.0"
    compileOnly group: "com.liferay", name: "com.liferay.portal.configuration.metatype", version: "2.0.0"
    compileOnly group: "com.liferay.portal", name: "com.liferay.portal.kernel", version: "2.0.0"
    compileOnly group: "javax.servlet", name: "javax.servlet-api", version: "3.0.1"
    compileOnly group: "org.osgi", name: "org.osgi.core", version: "5.0.0"

    compile group: "com.liferay", name: "com.liferay.message.boards.api", version: "3.1.0"
}

jar.archiveName = 'com.liferay.message.boards.api.jar'

The dependencies mostly come from the build.gradle file from the module from the Liferay source found here: https://github.com/liferay/liferay-portal/blob/master/modules/apps/collaboration/message-boards/message-boards-api/build.gradle

We also added a compile dependency on the module that we're building a replacement for, in this case the com.liferay.message.boards.api module.

We also specify the archive name for the jar we're building, excluding the version number, so that it matches the specifications from the Liferay override documentation: https://github.com/liferay/liferay-portal/blob/master/tools/osgi-marketplace-override-README.markdown

We also need a bnd.bnd file to build our module:

Bundle-Name: Liferay Message Boards API
Bundle-SymbolicName: com.liferay.message.boards.api
Bundle-Version: 3.1.0
Export-Package:\
  com.liferay.message.boards.configuration,\
  com.liferay.message.boards.display.context,\
  com.liferay.message.boards.util.comparator
Liferay-Releng-Module-Group-Description:
Liferay-Releng-Module-Group-Title: Message Boards

Include-Resource: @com.liferay.message.boards.api-3.1.0.jar

The bulk of the file is going to come directly from the original: https://github.com/liferay/liferay-portal/blob/master/modules/apps/collaboration/message-boards/message-boards-api/bnd.bnd.

The only addition to the file is the Include-Resource BND declaration.  As I previously covered in my blog post about OSGi Module Dependencies, this is the declaration used to create an Uber Module.  But for our purposes, this actually provides the binary source for the bulk of the content of our module.

By building an Uber Module from the source module, we are basically rebuilding the jar from the exploded original module jar, giving us a baseline jar with all of the original content.

Finally we need our source override file, in this case we need the src/main/java/com/liferay/message/boards/configuration/MBConfiguration.java file:

package com.liferay.message.boards.configuration;

import aQute.bnd.annotation.metatype.Meta;

import com.liferay.portal.configuration.metatype.annotations.ExtendedObjectClassDefinition;

/**
 * @author Sergio González
 * @author dnebinger
 */
@ExtendedObjectClassDefinition(category = "collaboration")
@Meta.OCD(
	id = "com.liferay.message.boards.configuration.MBConfiguration",
	localization = "content/Language", name = "mb.configuration.name"
)
public interface MBConfiguration {

	/**
	 * Enter time in minutes on how often this job is run. If a user's ban is
	 * set to expire at 12:05 PM and the job runs at 2 PM, the expire will occur
	 * during the 2 PM run.
	 */
	@Meta.AD(deflt = "120", required = false)
	public int expireBanJobInterval();

	/**
	 * Flag that determines if the override should be applied.
	 */
	@Meta.AD(deflt = "false", required = false)
	public boolean applyOverride();
}

So the bulk of the code comes from the original: https://github.com/liferay/liferay-portal/blob/master/modules/apps/collaboration/message-boards/message-boards-api/src/main/java/com/liferay/message/boards/configuration/MBConfiguration.java

Our addition is the new flag value.

Now if we had other modifications for other classes, we would make sure we had the same paths, same packages, and same class names; we would just have our changes layered in on top of the originals.

We could even introduce new packages and classes for our custom code.

Building The Module

Building is pretty easy, we just use the gradle wrapper to do the build:

$ ../../gradlew build

When we look inside our built module jar, we can see that our change did, in fact, get woven into the new module jar:

Exploded Module

We can see from the highlighted line from the compiled class that our jar definitely contains our method, so our build is good.  We can also see the original packages and resources that we get from the Uber Module approach, so our module is definitely complete.

Deploying The Module

Okay, so this is the ugly part of this whole thing: deployments are not easy.

Here's the restrictions that we have to keep in mind:

  1. The portal cannot be running when we do the deployment.
  2. The built jar file must be copied manually to the $LIFERAY_HOME/osgi/marketplace/override folder.
  3. The $LIFERAY_HOME/osgi/state folder must be deleted.
  4. If you have changed any web-type file (javascript, JSP, css, etc.) you should delete the relevant folder from the $LIFERAY_HOME/work folder.
  5. If Liferay deploys a newer version than the one declared in our bundle override, our changes may not be applied.
  6. Only one override bundle can be in effect at a time; if someone else has an override bundle in this folder, your change will step on theirs, so this may not be an option (Liferay may distribute updates or hot fixes as module overrides in this fashion).
  7. Support for $LIFERAY_HOME/osgi/marketplace/override was added in later LR7CE/DXP releases, so check that the version you are using has this support.
  8. You can break your portal if your module override does bad things or has bugs.

Wow, that is a lot of restrictions.

Basically we need to copy our jar manually to the $LIFERAY_HOME/osgi/marketplace/override folder, but we cannot do it if the application server is running.  The $LIFERAY_HOME/osgi/state folder should be whacked as we are changing the content of the folder.  And the bundle folder in $LIFERAY_HOME/work may need to be deleted so any cached resources are properly cleaned out.

For this change, we copy the com.liferay.message.boards.api.jar file to the $LIFERAY_HOME/osgi/marketplace/override folder and delete the state folder while the application server is down, then we can fire up the application server to see the outcome.
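
Putting that together, a typical deployment pass looks something like this (a sketch assuming the jar was built into build/libs and Liferay runs on Tomcat):

$ # with the application server shut down:
$ cp build/libs/com.liferay.message.boards.api.jar $LIFERAY_HOME/osgi/marketplace/override/
$ rm -rf $LIFERAY_HOME/osgi/state
$ # only needed when web-type resources (javascript, JSP, css) changed:
$ rm -rf $LIFERAY_HOME/work/com.liferay.message.boards.api*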

Conclusion

We can verify the change works by navigating to the Control Panel -> System Settings -> Collaboration -> Message Boards page and viewing the change:

Deployed Module Override

Be forewarned, however!  Listen when I emphasize the following:

Pay special attention to the restrictions on this technique!  This is not a "build, deploy and forget it" technique, as each Liferay upgrade, fix pack or hot fix can easily invalidate your change, and every deployment requires special handling.
You can seriously break Liferay if you deliver bad code in your module override, so test the heck out of these overrides.
This should not be used in lieu of supported Liferay methods to extend or replace functionality (JSP fragment bundles, MVC command service overrides, etc.); it is just intended for edge cases where no other override/extension option is supported.

In other words:

 

Using a Custom Bundle for the Liferay Workspace

Technical Blogs December 6, 2016 By David H Nebinger

So the Liferay workspace is pretty handy when it comes to building all of your OSGi modules, themes, layout templates and yes, even your legacy code from the plugins SDK.

But, when it comes to initializing a local bundle for deployment or building a dist bundle, using one of the canned Liferay 7 bundles from sourceforge may not cut it, especially if you're using Liferay DXP, or if you're behind a strict proxy or you just want a custom bundle with fixes already streamed in.

Looking at the gradle.properties file in the workspace, it seems as though a custom URL will work and, in fact, it does.  You can point to your custom URL and download your custom bundle.

But if you don't have a local HTTP server to host your bundle on, you can simulate having a custom bundle downloaded from a site without actually doing a download.

Building the Custom Bundle

So first you need to build a custom bundle.  This is actually quite easy.  It is simply a matter of downloading a current bundle, unzipping it, making whatever changes you want, then re-zipping it back up.

So you might, for example, download the DXP Service Pack 1 bundle, expand it, apply Fix Pack 8, then zip it all back up.  This will give you a custom bundle with the latest (currently) fix pack ready for your development environment.

One key here, though, is that the bundle must have a root directory in the zip.  Basically this means you shouldn't see, for example, /tomcat-8.0.32 as a folder in the root of the zip; there should always be some directory that contains everything, such as /dxp-sp1-fp8/tomcat-8.0.32.

If you try to use your custom bundle and you get weird unzip errors like corrupt streams and such, that likely points to a missing directory at the root level with the bundle nestled underneath it.
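
The whole round trip looks something like this (a sketch; the file and folder names are hypothetical):

$ unzip liferay-dxp-digital-enterprise-tomcat-7.0-sp1.zip    # expands into a single root folder
$ # apply the fix pack, stream in your other changes inside that folder...
$ zip -r dxp-sp1-fp8.zip liferay-dxp-7.0-sp1/                # re-zip, keeping the root folder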

Copy the Bundle to the Cache Folder

As it turns out, Liferay actually caches downloaded bundles in the ~/.liferay/bundles directory.

Copy your custom bundle zip to this directory to get it in the download cache.

Update the Download URL

Next you have to update your gradle.properties file to use your new URL.

You can use any URL you want, just make sure it ends with your bundle name:

liferay.workspace.bundle.url=http://example.com/dxp-sp1-fp8.zip

Modify the Build Download Instructions

The final step is to modify the build.gradle file in the workspace root.  Add the following lines:

downloadBundle {
    onlyIfNewer false
    overwrite false
}

This basically has the downloadBundle task skip the network check to see if the remote URL file exists and is newer than the locally cached file.

Conclusion

Hey, that's it!  You're now ready to go with your local bundle.  Issue the "gradlew distBundleZip" command to build all of your modules, extract your custom bundle, feather in your deployment artifacts, and zip it all up into a ready-to-deploy Liferay bundle.

Great for parties, entertaining, and even building a container-based deployment artifact for a cloud-based Liferay environment.

 

Debugging Liferay 7 In Intellij

Technical Blogs September 15, 2016 By David H Nebinger

Introduction

So I'm doing more and more development using pure Intellij for Liferay 7 / DXP, even debugging.

I thought I'd share how I do it in case someone else is looking for a brief how-to.

Tomcat Setup

So I do not like running Tomcat within the IDE, it just feels wrong.  I'd rather have it run as a separate JVM with its own memory settings, etc.  Besides, I use external Tomcats to test deployments, run demos, etc., so I use them for development also.  The downside to this approach is that there is zero support for hot deploy; if you change code, you have to rebuild and redeploy for the changes to show up in the debugger.

Configuring Tomcat for debugging is really easy, but a quick script copy will make it even easier.

In your tomcat-8.0.32/bin directory, copy the startup script as debug.  On Windows that means copying startup.bat as debug.bat; on all others you copy startup.sh as debug.sh.

Edit your new debug script with your favorite text editor; near the end of the file you'll find the EXECUTABLE start line (it varies by platform).

Right before the "start", insert the word "jpda" and save the file.

For Windows your line will read:

call "%EXECUTABLE%" jpda start %CMD_LINE_ARGS%

For all others your line will read:

exec "$PRGDIR"/"$EXECUTABLE" jpda start "$@"

This gives you a great little startup script that enables remote debugging.

Use your debug script to launch Tomcat.

Intellij Setup

We now need to set up a remote debugging configuration.  From the Run menu, choose Edit Configurations... option to open the Run/Debug Configurations dialog.

Click the + sign in the upper left corner to add a new configuration and choose Remote from the dropdown menu.

Add Configuration Dialog

Give the configuration a valid name and change the port number to 8000 (Tomcat defaults to 8000).  If debugging locally, keep localhost as the host; if debugging on a remote server, use the remote hostname.

Remote Tomcat Debugging Setup

Click on OK to save the new configuration.

To start a debug session, select Debug from the Run menu and select the debug configuration to launch.  The debug panel will be opened and the console should report it has connected to Tomcat.

Debugging Projects

If your project happens to be the matching Liferay source for the portal you're running, you should have all of the source available to start an actual debug session.  I'm usually in my own projects needing to understand what is going on in my code or the interaction of my code with the portal.

So when I'm ready to debug I have already built and deployed my module and I can set breakpoints in my code and start clicking around in the portal.

When one of your breakpoints is hit, Intellij will come to the front and the debugger will be front and center.  You can step through your code, set watches and view object, variable and parameter values.

Intellij will let you debug your code or any declared dependencies in your project.  But once you make a call to code that is not a dependency of your project, the debugger may lose visibility on where you actually are.

Fortunately there's an easy fix for this.  Choose Project Structure... from the File menu.  Select the Libraries item on the left.  The right side is where additional libraries can be added for debugging without affecting your actual project dependencies.

Click the + sign to add a new library.  Pick Java if you have a local source directory and/or jar file that you want to add, or Maven if you want to download the dependency from the Maven repositories.  So, for example, you may want to add the portal-impl.jar file and link in the source directory to help debug against the core.  For the OSGi modules, you can add the individual jars or source dirs as you need them.

Add Libraries Dialog

Conclusion

So now you can debug Liferay/Tomcat remotely in Intellij.

Perhaps in a future blog I'll throw together a post about debugging Tomcat within Intellij instead of remotely...

Liferay 7 Development, Part 6

Technical Blogs September 14, 2016 By David H Nebinger

Introduction

In part 5 we started the portlet code.  We added the configuration support, started the portlet and added the PanelApp implementation to get the portlet in the control panel.

In this part we will be adding the view layer to the portlet along with the action handlers.  To complete these pieces we'll be layering in the use of our Filesystem Access DS component.

We're also going to take a quick look at Lexicon and what it means for us average portlet developers and implement our portlet using the new Liferay MVC framework.

MVC Implementation Details

The new Liferay MVC takes on a lot of the grunt work for portlet implementations, and in this new iteration we're actually building OSGi components that leverage annotations to get everything done.

So let's take a look at one of the ActionCommand components to see how they work.  The TouchFileFolderMVCActionCommand is one of the simpler action commands in our portlet so this one will allow us to look at the aspects of ActionCommands without getting bogged down in the implementation code.

/**
 * class TouchFileFolderMVCActionCommand: Action command that handles the 'touch' of the file/folder.
 * @author dnebinger
 */
@Component(
	configurationPid = "com.liferay.filesystemaccess.portlet.config.FilesystemAccessPortletInstanceConfiguration",
	immediate = true,
	property = {
		"javax.portlet.name=" + FilesystemAccessPortletKeys.FILESYSTEM_ACCESS,
		"mvc.command.name=/touch_file_folder"
	},
	service = MVCActionCommand.class
)
public class TouchFileFolderMVCActionCommand extends BaseMVCActionCommand {

So here is a standard ActionCommand declaration.  The annotation identifies our configuration pid for accessing portlet instance config, the properties indicate this is an ActionCommand for our filesystem access portlet and the MVC command path to get to this action class, and the service declaration indicates this is an MVCActionCommand implementation.

The class itself extends the BaseMVCActionCommand so we don't have to implement all of the plumbing, just our necessary business logic.

	/**
	 * doProcessAction: Called to handle the touch file/folder action.
	 * @param actionRequest Request instance.
	 * @param actionResponse Response instance.
	 * @throws Exception in case of error.
	 */
	@Override
	protected void doProcessAction(
		ActionRequest actionRequest, ActionResponse actionResponse)
		throws Exception {

And this is the method declaration that needs to be implemented as an extension of BaseMVCActionCommand.

		// get the config instance
		FilesystemAccessPortletInstanceConfiguration config = getConfiguration(
			actionRequest);

		if (config == null) {
			logger.warn("No config found.");

			SessionErrors.add(
				actionRequest, MissingConfigurationException.class.getName());

			return;
		}

		// Extract the target and current path from the action request params.
		String touchName = ParamUtil.getString(actionRequest, Constants.PARM_TARGET);
		String currentPath = ParamUtil.getString(actionRequest, Constants.PARM_CURRENT_PATH);

		// get the real target path to use for the service call
		String target = getLocalTargetPath(currentPath, touchName);

		// use the service to touch the item.
		_filesystemAccessService.touchFilesystemItem(config.rootPath(), target);
	}

This next method demonstrates how an action command can get access to the portlet instance configuration.

	/**
	 * getConfiguration: Returns the configuration instance given the action request.
	 * @param request The request to get the config object from.
	 * @return FilesystemAccessPortletInstanceConfiguration The config instance.
	 */
	private FilesystemAccessPortletInstanceConfiguration getConfiguration(
		ActionRequest request) {

		// Get the theme display object from the request attributes
		ThemeDisplay themeDisplay = (ThemeDisplay)request.getAttribute(
			WebKeys.THEME_DISPLAY);

		// get the current portlet instance
		PortletInstance instance = PortletInstance.fromPortletInstanceKey(
			FilesystemAccessPortletKeys.FILESYSTEM_ACCESS);

		FilesystemAccessPortletInstanceConfiguration config = null;

		// use the configuration provider to get the configuration instance
		try {
			config = _configurationProvider.getPortletInstanceConfiguration(
				FilesystemAccessPortletInstanceConfiguration.class,
				themeDisplay.getLayout(), instance);
		} catch (ConfigurationException e) {
			logger.error("Error getting instance config.", e);
		}

		return config;
	}

	/**
	 * getLocalTargetPath: Returns the local target path.
	 * @param localPath The local path.
	 * @param target The target filename.
	 * @return String The local target path.
	 */
	private String getLocalTargetPath(final String localPath, final String target) {
		if (Validator.isNull(target)) {
			return null;
		}

		if (Validator.isNull(localPath)) {
			return StringPool.SLASH + target;
		}

		if (localPath.trim().endsWith(StringPool.SLASH)) {
			return localPath.trim() + target.trim();
		}

		return localPath.trim() + StringPool.SLASH + target.trim();
	}

Just as with our previous OSGi components, we rely on OSGi to inject services needed to implement the component functionality.

	/**
	 * setConfigurationProvider: Sets the configuration provider for config access.
	 * @param configurationProvider The config provider to use.
	 */
	@Reference
	protected void setConfigurationProvider(
		ConfigurationProvider configurationProvider) {

		_configurationProvider = configurationProvider;
	}

	/**
	 * setFilesystemAccessService: Sets the filesystem access service instance to use.
	 * @param filesystemAccessService The filesystem access service instance.
	 */
	@Reference(unbind = "-")
	protected void setFilesystemAccessService(
		final FilesystemAccessService filesystemAccessService) {
		_filesystemAccessService = filesystemAccessService;
	}

	private FilesystemAccessService _filesystemAccessService;

	private ConfigurationProvider _configurationProvider;

	private static final Log logger = LogFactoryUtil.getLog(
		TouchFileFolderMVCActionCommand.class);
}

As I started fleshing out the other ActionCommand implementations, I quickly found that I was copying and pasting the getConfiguration() and getLocalTargetPath() methods.  I refactored those into a base class, BaseActionCommand, and changed all of the ActionCommand implementations to extend this base class, so don't be alarmed when the source differs from the listing above.
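
A minimal sketch of that refactoring, showing just one of the shared helpers (getConfiguration() moves over the same way):

public abstract class BaseActionCommand extends BaseMVCActionCommand {

	// shared helper, verbatim from the listing above, now available to all
	// of the portlet's action commands
	protected String getLocalTargetPath(final String localPath, final String target) {
		if (Validator.isNull(target)) {
			return null;
		}

		if (Validator.isNull(localPath)) {
			return StringPool.SLASH + target;
		}

		if (localPath.trim().endsWith(StringPool.SLASH)) {
			return localPath.trim() + target.trim();
		}

		return localPath.trim() + StringPool.SLASH + target.trim();
	}
}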

Serving resources is handled in a similar fashion.  Below is the declaration for the FileDownloadMVCResourceCommand, the component which will handle serving the file as a Serve Resource handler.

/**
 * class FileDownloadMVCResourceCommand: A resource command class for returning files.
 * @author dnebinger
 */
@Component(
	immediate = true,
	property = {
		"javax.portlet.name=" + FilesystemAccessPortletKeys.FILESYSTEM_ACCESS,
		"mvc.command.name=/download-file"
	},
	service = MVCResourceCommand.class
)
public class FileDownloadMVCResourceCommand extends BaseMVCResourceCommand {

As with all Liferay MVC implementations, the view (render phase) is handled with JSP files.  JSP files do not have access to the OSGi injection mechanisms, so we have to use a different mechanism to get the OSGi-injected resources and make them available to the JSP files.  We change the portlet class to handle this injection and pass the references through:

/**
 * class FilesystemAccessPortlet: This portlet is used to provide filesystem
 * access.  Allows an administrator to grant access to users to access local
 * filesystem resources, useful in those cases where the user does not have
 * direct OS access.
 *
 * This portlet will provide access to download, upload, view, 'touch' and
 * edit files.
 * @author dnebinger
 */
@Component(
	immediate = true,
	property = {
		"com.liferay.portlet.display-category=category.system.admin",
		"com.liferay.portlet.header-portlet-css=/css/main.css",
		"com.liferay.portlet.instanceable=false",
		"javax.portlet.name=" + FilesystemAccessPortletKeys.FILESYSTEM_ACCESS,
		"javax.portlet.display-name=Filesystem Access",
		"javax.portlet.init-param.template-path=/",
		"javax.portlet.init-param.view-template=/view.jsp",
		"javax.portlet.resource-bundle=content.Language",
		"javax.portlet.security-role-ref=power-user,user"
	},
	service = Portlet.class
)
public class FilesystemAccessPortlet extends MVCPortlet {

	/**
	 * render: Overrides the parent method to handle the injection of our
	 * service as a render request attribute so it is available to all of the
	 * jsp files.
	 *
	 * @param renderRequest The render request.
	 * @param renderResponse The render response.
	 * @throws IOException In case of error.
	 * @throws PortletException In case of error.
	 */
	@Override
	public void render(RenderRequest renderRequest, RenderResponse renderResponse) throws IOException, PortletException {

		// set the service as a render request attribute
		renderRequest.setAttribute(
			Constants.ATTRIB_FILESYSTEM_SERVICE, _filesystemAccessService);

		// invoke super class method to let the normal render operation run.
		super.render(renderRequest, renderResponse);
	}

	/**
	 * setFilesystemAccessService: Sets the filesystem access service instance to use.
	 * @param filesystemAccessService The filesystem access service instance.
	 */
	@Reference(unbind = "-")
	protected void setFilesystemAccessService(
		final FilesystemAccessService filesystemAccessService) {

		_filesystemAccessService = filesystemAccessService;
	}

	private FilesystemAccessService _filesystemAccessService;

	private static final Log logger = LogFactoryUtil.getLog(
		FilesystemAccessPortlet.class);
}

So here we let OSGi inject the references into the portlet instance class itself, and we override the render() method to pass our service references to the view layer as render request attributes.  In our init.jsp page, you'll find that the service reference instances are extracted from the render request attributes and turned into a variable that will be available to all JSP pages that include the init.jsp file.  In this way our JSPs have access to the injected services without having to go through the older Util classes to statically access the service reference.
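
A sketch of that extraction in init.jsp, assuming the attribute key set in render() above (renderRequest itself comes from the <portlet:defineObjects /> tag):

<%
FilesystemAccessService filesystemAccessService =
	(FilesystemAccessService)renderRequest.getAttribute(
		Constants.ATTRIB_FILESYSTEM_SERVICE);
%>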

So the only remarkable thing about the JSP files themselves is their location.  Instead of sitting next to the WEB-INF folder the way we used to build and deploy portlets, they are now built and shipped within the module jar by putting them into the resources/META-INF/resources directory; this is our new "root" path for all web assets.  So in this folder in our project we have all of our JSP files as well as a css folder with our css file.
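
The web assets in this project therefore end up laid out like this (abbreviated; the project has more JSPs alongside view.jsp):

src/main/resources/META-INF/resources/
  init.jsp
  view.jsp
  css/
    main.css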

Unfortunately I implemented the JSP files before Nathan Cavanaugh's new blog entry, https://web.liferay.com/web/nathan.cavanaugh/blog/-/blogs/the-status-and-direction-of-the-frontend-infrastructure-in-liferay-7-dxp.  Had I waited, I might have known that my use of AUI may have been a bad decision.  But then I remembered that the bulk of the portal is written using AUI and that, unless the UI is completely rewritten all the way down to the least significant portlet, AUI is going to remain for some time to come.

Oh yeah, Lexicon

I mentioned that I was going to talk about Lexicon in the portlet.  Lexicon (and Metal and Soy and ...) are really hot topics for pure front-end developers.  I'm looking for more discussion aimed at cross-functional developers, the average developers doing both front-end and back-end work who are looking for a suitable path to navigate between both worlds without falling down the rabbit holes both sides offer from time to time.  AUI has historically been that path, a tag library that cross-functional developers could use to leverage Liferay's use of YUI javascript without really having to learn all of the details of the AUI/YUI javascript library.

So to start that conversation, I'm going to talk about one of the design choices I made and how Lexicon was necessary as a result.

My need was fairly simple - I wanted to show an "add file" button that, when clicked, would show a dialog to collect the filename.  The dialog would have an OK button (that would trigger the creation of the file) and a cancel button (that cancelled the add of the file).

So I needed a modal dialog, but how was I going to implement that?  The choices for modal dialog implementation, as we all know, are seemingly endless, but does Lexicon offer a solution?

The answer is Yes, there is a Lexicon solution.  The doco can be found here: http://liferay.github.io/lexicon/content/modals/

Now if you're like me, you read this whole page and can see how the front end guys are just eating this up.  Use some standard tags, decorate with some attributes and voila, you have yourself a modal dialog on a page.  But you're left wondering how you're going to drop this in your portlet jsp page, how you're going to wire it up to trigger calls to your backend, etc., and you think back wistfully remembering the AUI tag library and how you could do some JSP tags to get a similar outcome...  Ah, the good ole days...

Oh, sorry, back on topic.  So I was able to leverage the Lexicon modal dialog in my JSP page and, with some javascript help, got everything working the way I needed.  What I'm about to show is likely not the best way to have integrated Lexicon; I'm sure Nate and his team would be able to shoot holes all through this code.  But like I said, I'm looking for the discussion that covers how cross-functional developers will use Lexicon, and if this starts that discussion, then it's worth sharing.

So here goes, these are the parts of view.jsp which handles the add file modal dialog:

<button class="btn btn-default" data-target="#<portlet:namespace/>AddFileModal" data-toggle="modal" id="<portlet:namespace/>showAddFileBtn"><liferay-ui:message key="add-file" /></button>
<div aria-labelledby="<portlet:namespace/>AddFileModalLabel" class="fade in lex-modal modal" id="<portlet:namespace/>AddFileModal" role="dialog" tabindex="-1">
	<div class="modal-dialog modal-lg">
		<div class="modal-content">
			<portlet:actionURL name="/add_file_folder" var="addFileActionURL" />

			<div id="<portlet:namespace/>fm3">
				<form action="<%= addFileActionURL %>" id="<portlet:namespace/>form3" method="post" name="<portlet:namespace/>form3">
					<aui:input name="<%= ActionRequest.ACTION_NAME %>" type="hidden" />
					<aui:input name="redirect" type="hidden" value="<%= currentURL %>" />
					<aui:input name="currentPath" type="hidden" value="<%= currentPath %>" />
					<aui:input name="addType" type="hidden" value="addFile" />

					<div class="modal-header">
						<button aria-labelledby="Close" class="btn btn-default close" data-dismiss="modal" role="button" type="button">
							<svg aria-hidden="true" class="lexicon-icon lexicon-icon-times">
								<use xlink:href="<%= themeDisplay.getPathThemeImages() + "/lexicon/icons.svg" %>#times" />
							</svg>
						</button>

						<button class="btn btn-default modal-primary-action-button visible-xs" type="button">
							<svg aria-hidden="true" class="lexicon-icon lexicon-icon-check">
								<use xlink:href="<%= themeDisplay.getPathThemeImages() + "/lexicon/icons.svg" %>#check" />
							</svg>
						</button>

						<h4 class="modal-title" id="<portlet:namespace/>AddFileModalLabel"><liferay-ui:message key="add-file" /></h4>
					</div>

					<div class="modal-body">
						<aui:fieldset>
							<aui:input autoFocus="true" helpMessage="add-file-help" id="addNameFile" label="add-file" name="addName" type="text" />
						</aui:fieldset>
					</div>

					<div class="modal-footer">
						<button class="btn btn-default close-modal" id="<portlet:namespace/>addFileBtn" name="<portlet:namespace/>addFileBtn" type="button"><liferay-ui:message key="add-file" /></button>
						<button class="btn btn-link close-modal" data-dismiss="modal" type="button"><liferay-ui:message key="cancel" /></button>
					</div>
				</form>
			</div>
		</div>
	</div>
</div>

So first I have the button that will trigger the display of the modal dialog.  The modal dialog is in the <div /> that follows.  The content div contains my AUI-based form but is decorated with appropriate Lexicon tags to add the dialog buttons.

There's also some javascript on the page that affects the dialog:

	// note that the element ids in the markup are namespaced, so the
	// selectors must include the portlet namespace to match
	var showAddFile = A.one('#<portlet:namespace/>showAddFileBtn');

	if (showAddFile) {
		showAddFile.after('click', function() {
			var addNameText = A.one('#<portlet:namespace/>addNameFile');

			if (addNameText) {
				addNameText.val('');
				addNameText.focus();
			}
		});
	}

	var addFileBtnVar = A.one('#<portlet:namespace/>addFileBtn');

	if (addFileBtnVar) {
		addFileBtnVar.after('click', function() {
			var fm = A.one('#<portlet:namespace/>form3');

			if (fm) {
				fm.submit();
			}
		});
	}

The first chunk is used to set focus on the name field after the dialog is displayed.  The second chunk triggers the submit of the form when the user clicks the okay button in the dialog.

The highlight of this code is that we get a modal dialog without having to write much javascript or any custom JSP tags; we basically just had to add the markup to flesh out the dialog content.

And I think that's really the essence of the Lexicon stuff; I think it's all going to work out to be a "standard" set of tags and attribute decorations that will render the UI, plus we'll have some code to put in at the JSP level to bind into our regular portlet code.

Conclusion

So here we are at the end of part 6 and I think I've covered it all...

We now have a finished project that satisfies all of our original requirements:

  • Must run on Liferay 7 CE GA2 (or newer).  Since we leveraged all Liferay 7 tools and are building module jars, we're definitely Liferay 7 compatible.
  • Must use Gradle for the build tool.  We set this up in part 2 of the blog series using the blade command line tool.
  • Must be full-on OSGi modules, no legacy stuff.  We also started this in part 2 and continued the module development through all other parts.
  • Must leverage Lexicon.  This was done in our modal dialog just introduced above.
  • Must leverage the new Configuration facilities.  The configuration facilities were added in part 5 as part of the initial portlet setup.
  • Must leverage the new Liferay MVC portlet framework.  The bulk of the Liferay MVC implementation was added in this blog part.
  • Must provide new Panel Application (side bar) support.  This was covered in part 5 of the blog series.

The original requirements have been satisfied, but how does it look?  Here's some screen shots to whet your appetite:

Main View

The main view lists files/folders that can be acted upon.  The path shows that I'm in /tomcat-8.0.32 but actually I'm off somewhere else in the filesystem.  Remember in a previous part I was using a "root path" to constrain filesystem access?  This shows that I am within a view sandbox that I cannot just sneak my way out of.  Even though we're exposing the underlying filesystem, we don't want to just throw out all semblances of security.

Add File Dialog

Our Lexicon-based modal dialog for adding a new file.

Upload File Dialog

The modal dialog even works for a file upload.

View File

The file view is provided by the AUI ACE editor component.

Edit File

The edit file component is also provided by the ACE editor.

While not shown, the configuration panel allows the admin to set the "root path", enable/disable adds, uploads, downloads, deletes and edits.

That's pretty much it.  Hope you enjoyed the multi-part blog.  Feel free to comment below or, better yet, launch a discussion in the forums.

And remember, you can find the source code in github: https://github.com/dnebing/filesystem-access

Liferay 7 Development, Part 5

Technical Blogs September 14, 2016 By David H Nebinger

Introduction

In the first four parts we have introduced our project, laid out the Liferay workspace to create our modules, defined our DS service API and have just completed our DS service implementation.

It's now time to move on to starting our Filesystem Access Portlet.  With everything I want to do in this portlet, it's going to span multiple parts.  In this part we're going to start the portlet by tackling some key pieces.

Note that this portlet is going to be a standard OSGi module, so we're not building a portlet war file or anything like that.  We're building an OSGi portlet module jar.

Configuration

So configuration is probably an odd place to start, but it is a key part of portlet design.  This is basically going to define the configurable parts of our portlet.  We're defining the fields we'll allow an administrator to set and using those values to drive the rest of the portlet.  Personally I've always found it easier to build in the necessary flexibility up front rather than getting down the road on the portlet development and trying to retrofit it in later on.

For example, one of our configurable items is what I'm calling the "root path".  The root path is a fixed filesystem path that constrains what parts of the filesystem the users of the portlet can access.  And this constraint is enforced at all levels; it forms a layer of protection to ensure folks are not creating/editing files outside of this root path.  By starting with this as a configuration point, the rest of the development has to take this into account.  And we've seen this already in the DS API and service presented in the previous parts - every method in the API has the rootPath as the first argument (yes, I had my configuration parts figured out before I started any of the development).

So let's review our configuration elements:

  • rootPath - This is the root path that constrains all filesystem access.
  • showPermissions - This is a flag whether to show permissions or not.  On Windows systems permissions don't really work, so this flag can remove the non-functional permissions column.
  • deletesAllowed - This is a flag that determines whether files/folders can be deleted or not.
  • uploadsAllowed - This is a flag that determines whether file uploads are allowed.
  • downloadsAllowed - This is a flag that determines whether file downloads are allowed.
  • editsAllowed - This is a flag that determines whether inline editing is allowed.
  • addsAllowed - This is a flag that determines whether file/folder additions are allowed.
  • viewSizeLimit - This is a size limit that determines whether a file can be viewed in the browser.  This can impose an upper limit on generated HTML fragment size.
  • downloadableFolderSizeLimit - This defines the size limit for downloading folders.  Since folders will be zipped live out of the filesystem, this can be used to ensure server resources are not overwhelmed creating a large zip stream in memory.
  • downloadableFolderItemLimit - This defines the file count limit for downloadable folders.  This too is a mechanism to define an upper limit for server resource consumption.

Seeing this list and understanding how it will affect the interface, it should be pretty clear it's going to be much easier building that into the UI from the start rather than trying to retrofit it in later.

In previous versions of Liferay we would likely be using portlet preferences for these options, but since we're building for Liferay 7 we're going to take advantage of the new Configuration support.

We're going to start by creating a new package in our portlet, com.liferay.filesystemaccess.portlet.config (current Liferay practice is to put the configuration classes into a config package in your portlet project).

There are a bunch of classes that will be used for configuration, let's start with the central one, the configuration definition class FilesystemAccessPortletInstanceConfiguration:

/**
 * class FilesystemAccessPortletInstanceConfiguration: Instance configuration for
 * the filesystem access portlet.
 * @author dnebinger
 */
@ExtendedObjectClassDefinition(
	category = "platform",
	scope = ExtendedObjectClassDefinition.Scope.PORTLET_INSTANCE
)
@Meta.OCD(
	localization = "content/Language",
	name = "FilesystemAccessPortlet.portlet.instance.configuration.name",
	id = "com.liferay.filesystemaccess.portlet.config.FilesystemAccessPortletInstanceConfiguration"
)
public interface FilesystemAccessPortletInstanceConfiguration {

	/**
	 * rootPath: This is the root path that constrains all filesystem access.
	 */
	@Meta.AD(deflt = "${LIFERAY_HOME}", required = false)
	public String rootPath();

	// snip
}

There's a lot of stuff here, so let's dig in...

The @Meta annotations are from BND and define meta info on the class and its members.  The OCD annotation on the class defines the name of the configuration (using the portlet language bundle) and the ID for the configuration.  The ID is critical: it is referenced elsewhere and must be unique across the portal, so the full class name is the current standard.  The AD annotation is used to define information about the individual fields.  We're defining the default values for the fields and indicating that they are not required (since we have a default).
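
For example (a sketch based on the configuration list above, not the complete source, and the actual default may differ), one of the boolean flags would be declared the same way:

	/**
	 * showPermissions: Flag indicating whether to show the permissions column.
	 */
	@Meta.AD(deflt = "false", required = false)
	public boolean showPermissions();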

The @ExtendedObjectClassDefinition is used to define the section of the System Settings configuration panel.  The category (language bundle key) defines the major category the settings will be set from, and the scope defines whether the config is per portlet instance, per group, per company or system-wide.  We're going to use portlet instance scope so different instances can have their own configuration.

The next class is the FilesystemAccessPortletInstanceConfigurationAction class, the class that handles submits when the configuration is changed.  Instead of showing the whole class, I'm only going to show parts of the file that need some discussion.  The whole class is in the project on GitHub.

/**
 * class FilesystemAccessPortletInstanceConfigurationAction: Configuration action for the filesystem access portlet.
 * @author dnebinger
 */
@Component(
	immediate = true,
	property = {
		"javax.portlet.name=" + FilesystemAccessPortletKeys.FILESYSTEM_ACCESS
	},
	service = ConfigurationAction.class
)
public class FilesystemAccessPortletInstanceConfigurationAction
	extends DefaultConfigurationAction {

	/**
	 * getJspPath: Return the path to our configuration jsp file.
	 * @param request The servlet request.
	 * @return String The path
	 */
	@Override
	public String getJspPath(HttpServletRequest request) {
		return "/configuration.jsp";
	}

	/**
	 * processAction: This is used to process the configuration form submission.
	 * @param portletConfig The portlet configuration.
	 * @param actionRequest The action request.
	 * @param actionResponse The action response.
	 * @throws Exception in case of error.
	 */
	@Override
	public void processAction(
		PortletConfig portletConfig, ActionRequest actionRequest,
		ActionResponse actionResponse)
		throws Exception {

		// snip
	}

	/**
	 * setServletContext: Sets the servlet context, use your portlet's bnd.bnd Bundle-SymbolicName value.
	 * @param servletContext The servlet context to use.
	 */
	@Override
	@Reference(
		target = "(osgi.web.symbolicname=com.liferay.filesystemaccess.web)", unbind = "-"
	)
	public void setServletContext(ServletContext servletContext) {
		super.setServletContext(servletContext);
	}
}

So the configuration action handler is actually a DS service.  It's using the @Component annotation and is providing the ConfigurationAction service.  The javax.portlet.name property binds the handler to our portlet (so the portal maps the correct configuration action handler to each portlet).

The class returns its own path to the JSP file used to show the configuration options.  The path returned is relative to the portlet's web root.

The processAction() method is used to process the values from the configuration form submit.  When you review the code you'll see it is extracting parameter values and saving preference values.
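
As a rough sketch of that body (one field shown, names taken from the configuration list above; this is not the complete source), it follows the usual DefaultConfigurationAction pattern:

		// extract the submitted value (the real code handles every
		// configuration field, not just rootPath)
		String rootPath = ParamUtil.getString(actionRequest, "rootPath");

		// stage the value as a portlet preference
		setPreference(actionRequest, "rootPath", rootPath);

		// the superclass persists the staged preferences
		super.processAction(portletConfig, actionRequest, actionResponse);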

The class uses an OSGi injection using @Reference to inject the servlet context for the portlet.  The important part to note here is that the value must match the Bundle-SymbolicName value from the project's bnd.bnd file.

There are three other source files in this package that I'll describe briefly...

The FilesystemAccessDisplayContext class is a wrapper class to provide access to the configuration instance object in the different portlet phases (e.g., the Action or Render phase).  In some phases the regular PortletDisplay instance (a new object available from the ThemeDisplay) can be used to get the instance config object, but in the Action phase the ThemeDisplay is not fully populated, so this access fails.  The FilesystemAccessDisplayContext class provides access in all phases.
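
For the render phase, for example, a minimal sketch of the lookup (error handling elided) goes through the PortletDisplay:

	PortletDisplay portletDisplay = themeDisplay.getPortletDisplay();

	// resolves the configuration for the current portlet instance
	FilesystemAccessPortletInstanceConfiguration configuration =
		portletDisplay.getPortletInstanceConfiguration(
			FilesystemAccessPortletInstanceConfiguration.class);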

The FilesystemAccessPortletInstanceConfigurationBeanDeclaration class is a simple component to return the FilesystemAccessPortletInstanceConfiguration class so a configuration instance can be created on demand for new instances.

The FilesystemAccessPortletInstanceConfigurationPidMapping class maps the configuration class (FilesystemAccessPortletInstanceConfiguration) with the portlet id to again support dynamic creation and tracking of configuration instances.
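
Both are tiny DS components built on the portal's settings definition interfaces; minimal sketches matching the descriptions above (not the verbatim project source) look like:

@Component(service = ConfigurationBeanDeclaration.class)
public class FilesystemAccessPortletInstanceConfigurationBeanDeclaration
	implements ConfigurationBeanDeclaration {

	@Override
	public Class<?> getConfigurationBeanClass() {
		return FilesystemAccessPortletInstanceConfiguration.class;
	}
}

@Component(service = ConfigurationPidMapping.class)
public class FilesystemAccessPortletInstanceConfigurationPidMapping
	implements ConfigurationPidMapping {

	@Override
	public Class<?> getConfigurationBeanClass() {
		return FilesystemAccessPortletInstanceConfiguration.class;
	}

	@Override
	public String getConfigurationPid() {
		// the portlet id serves as the configuration pid
		return FilesystemAccessPortletKeys.FILESYSTEM_ACCESS;
	}
}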

The Portlet Class

Under the new Liferay MVC framework, portlet classes are much smaller than they used to be.  Here is the complete portlet class:

/**
 * class FilesystemAccessPortlet: This portlet is used to provide filesystem
 * access.  Allows an administrator to grant access to users to access local
 * filesystem resources, useful in those cases where the user does not have
 * direct OS access.
 *
 * This portlet will provide access to download, upload, view, 'touch' and
 * edit files.
 * @author dnebinger
 */
@Component(
	immediate = true,
	property = {
		"com.liferay.portlet.display-category=category.system.admin",
		"com.liferay.portlet.header-portlet-css=/css/main.css",
		"com.liferay.portlet.instanceable=false",
		"javax.portlet.display-name=Filesystem Access",
		"javax.portlet.init-param.config-template=/configuration.jsp",
		"javax.portlet.init-param.template-path=/",
		"javax.portlet.init-param.view-template=/view.jsp",
		"javax.portlet.resource-bundle=content.Language",
		"javax.portlet.security-role-ref=power-user,user"
	},
	service = Portlet.class
)
public class FilesystemAccessPortlet extends MVCPortlet {
}

It has no body at all.  Can't get any simpler than that...

There is no longer a portlet.xml file or a liferay-portlet.xml file.  Instead, these values are all provided through the properties on the DS component for the portlet.

The Panel App

Our portlet is a regular portlet that admins will be able to drop on any page they want.  However, we're also going to install the portlet as a Panel App, the new way to get a portlet into the Control Panel.  We'll do this using the FilesystemAccessPanelApp class:

/**
 * class FilesystemAccessPanelApp: Component which exposes our portlet as a control panel app.
 * @author dnebinger
 */
@Component(
	immediate = true,
	property = {
		"panel.app.order:Integer=750",
		"panel.category.key=" + PanelCategoryKeys.CONTROL_PANEL_CONFIGURATION
	},
	service = PanelApp.class
)
public class FilesystemAccessPanelApp extends BasePanelApp {

	/**
	 * getPortletId: Returns the portlet id that will be in the control panel.
	 * @return String The portlet id.
	 */
	@Override
	public String getPortletId() {
		return FilesystemAccessPortletKeys.FILESYSTEM_ACCESS;
	}

	/**
	 * setPortlet: Injects the portlet into the base class, uses the actual portlet name for the lookup which
	 * also matches the javax.portlet.name value set in the portlet class annotation properties.
	 * @param portlet
	 */
	@Override
	@Reference(
		target = "(javax.portlet.name=" + FilesystemAccessPortletKeys.FILESYSTEM_ACCESS + ")",
		unbind = "-"
	)
	public void setPortlet(Portlet portlet) {
		super.setPortlet(portlet);
	}

}

The @Component annotation shows this is yet another DS service, one that provides the PanelApp service.  The panel.category.key value will put our portlet under the configuration section of the control panel, and the high panel.app.order property will put our portlet near the bottom of the list.

The methods specified will ensure the base class has the Filesystem Access portlet references for the panel app to work.

The JSPs

We will update the init.jsp and add the configuration.jsp files.  Not much to see, pretty generic jsp implementations.  The init.jsp file pulls in all of the includes used in the other jsp files and copies the config into local member fields.  The configuration jsp file has the AUI form for all of the configuration elements.

Configure The Module

Our portlet is still in the process of being fleshed out, but we'll wrap up this part of the blog by configuring and deploying the module in its current form.

Edit the bundle file so it contains the following:

Bundle-Name: Filesystem Access Portlet
Bundle-SymbolicName: com.liferay.filesystemaccess.web

Bundle-Version: 1.0.0

Import-Package:\
	javax.portlet;version='[2.0,3)',\
	javax.servlet;version='[2.5,4)',\
	*

Private-Package: com.liferay.filesystemaccess.portlet
Web-ContextPath: /filesystem-access

-metatype: *

So here we're forcing the import of the portlet and servlet APIs at specific version ranges; these entries ensure that dependencies of our dependencies are included.

We also declare that our portlet classes are all private.  This means that other folks will not be able to use us as a dependency and include our classes.

An important addition is the Web-ContextPath key.  This value is used while the portlet is running to define the context path to portlet resources.  Given the value above, the portal will make our resources available as /o/filesystem-access/..., so for example you could go to /o/filesystem-access/css/main.css to pull the static css file if necessary.

Deployment

Well the modifications are all done.  At a command prompt, go to the modules/apps/filesystem-access-web directory and execute the following command:

$ ../../../gradlew build

You should end up with your new com.liferay.filesystem.access.web-1.0.0.jar bundle file in the build/libs directory.  If you have a Liferay 7 CE or Liferay DXP tomcat environment running, you can drop the jar into the Liferay deploy folder.

Drop into the Gogo Shell and you can even verify that the module has started:

Welcome to Apache Felix Gogo

g! lb | grep Filesystem
  487|Active     |   10|Filesystem Access Service (1.0.0)
  488|Active     |   10|Filesystem Access API (1.0.0)
  489|Active     |   10|Filesystem Access Portlet (1.0.0)

If they are all active, you're in good shape.

Viewing in the Portal

For the first time in this blog series, we actually have something we can add in the portal.  Log in as an administrator to your portal instance and go to the add panel.  Under the Applications section you should find the System Administration group and in there is our new Filesystem Access portlet.  Grab it and drop it on the page.  You should see it render in the page.

So we haven't really done anything to the main view, but let's test what we did add.  First go to the ... menu and choose the Configuration element.  Although it probably isn't pretty, you should see your configuration panel there.  You can change values, click save, exit and come back in to see that they stick, ...  Basically the configuration should be working.

Next pull up the left panel and go to the Control Panel's Configuration section.  You should see the Filesystem Access portlet at the bottom of the list (well, the position depends upon whatever else you have installed, but in a clean bundle it will be at the bottom).  You can click on the option and you'll get your portlet again, but just the welcome message.  Not very impressive, but we'll get there.

You can also go to the System Settings control panel and you'll see a Platform tab at the top.  When you click on Platform, you should see Filesystem Access.  Click on it for the default configuration settings (used as defaults for new portlet instances).  This form will look different than your configuration.jsp because it isn't using your JSP at all; it's generated from the FilesystemAccessPortletInstanceConfiguration class and the information in the @Meta annotations.

Another cool thing you can try: create a com.liferay.filesystemaccess.portlet.config.FilesystemAccessPortletInstanceConfiguration.cfg file in the $LIFERAY_HOME/osgi/modules directory and define default configuration values there (e.g., a line like rootPath=/opt/content-root).  The values in this file override the defaults from the @Meta annotations, giving you a deployable default configuration.

Conclusion

Well, our portlet is not done yet.  We have a good start on it, but we'll have yet more to add in the next part.  Remember the code for the project is up on GitHub and the code for this part is in the dev-part-5 branch.

In the next part we'll pick up on the portlet development and start building the real views.

Liferay 7 Development, Part 4

Technical Blogs September 14, 2016 By David H Nebinger

Introduction

In part 3 of the blog, the API for the Filesystem Access project was fleshed out.

In this part of the blog, we'll create the service implementation module.

The Data Transfer Object

Now we get to implement our DTO.  The implementation is pretty simple; we just have to implement all of the interface methods and expose values from the data we retain.

The code will be available on GitHub so I won't go into great detail here.  Suffice it to say that for the most part it is exposing values from the underlying File object from the filesystem.

The Service

The service implementation is just as straightforward as the DTO.  It leverages the java.io.* packages and APIs and also uses com.liferay.portal.kernel.util.FileUtil for some supporting functions.  It also integrates the use of the Liferay auditing mechanism to issue relevant audit messages.
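
To give a taste of the style (a simplified sketch, not the actual project code; the real method also validates the path against the root path and issues audit messages), an update method might look like:

	@Override
	public void updateContent(
			final String rootPath, final String localPath, final String content)
		throws PortalException {

		// resolve the target file inside the root path sandbox
		File target = new File(rootPath, localPath);

		try {
			// let FileUtil handle the write to disk
			FileUtil.write(target, content);
		} catch (IOException ioe) {
			throw new PortalException(ioe);
		}
	}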

The Annotations

It's really the annotations which will make the FilesystemAccessServiceImpl into a true DS service.

The first annotation is the Component annotation, and it is used like this:

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

/**
 * class FilesystemAccessServiceImpl: Implementation class of the FilesystemAccessService.
 *
 * @author dnebinger
 */
@Component(service = FilesystemAccessService.class)
public class FilesystemAccessServiceImpl implements FilesystemAccessService {

Here we are declaring that we have a DS component implementation that provides the FilesystemAccessService.  And boom, we're done.  We have just implemented a DS service.  Couldn't be easier, could it?

The other annotation import there is the Reference annotation.  We're going to be showing how to use the DS service in the next part of the blog, but we'll inject a preview here.

In the discussion of the service in the previous section, we mentioned that we were going to be integrating the Liferay audit mechanism so different filesystem access methods would be audited.  To integrate the audit mechanism, we need an instance of Liferay's AuditRouter service.  But how do we get the reference?  Well, AuditRouter is also implemented as a DS service.  We can have OSGi inject the service when our module is started using the Reference annotation such as:

/**
 * _auditRouter: We need a reference to the audit router
 */
@Reference
private AuditRouter _auditRouter;

And boom, we're done there too.

OSGi is handling all of the heavy lifting for us.  One annotation exposes our class as a service component implementation and another annotation will inject services which we have a dependency on.
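
To give a feel for how the injected reference gets used (a hand-wavy sketch; createAuditMessage() is a hypothetical stand-in for the helpers in the project's audit package):

	protected void audit(final String eventType, final String path) {
		// createAuditMessage() is hypothetical; the real message-building
		// support lives in the com.liferay.filesystemaccess.audit package.
		AuditMessage auditMessage = createAuditMessage(eventType, path);

		try {
			// hand the message off to the registered audit handlers
			_auditRouter.route(auditMessage);
		} catch (Exception e) {
			// auditing problems should not break the filesystem operation
		}
	}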

Configure The Module

So we're not quite finished yet.  We have the code done, but we should configure our bundle so we get the right outcome.

Edit the bundle file so it contains the following:

Bundle-Name: Filesystem Access Service
Bundle-SymbolicName: com.liferay.filesystem.access.svc
Bundle-Version: 1.0.0
-sources: true

The Liferay standard is to give a meaningful name for the bundle, but the symbolic name should be something akin to the project package.  You definitely want the symbolic name to be unique when it is deployed to the OSGi container.

Deployment

Well the modifications are all done.  At a command prompt, go to the modules/apps/filesystem-access-svc directory and execute the following command:

$ ../../../gradlew build

You should end up with your new com.liferay.filesystem.access.svc-1.0.0.jar bundle file in the build/libs directory.  If you have a Liferay 7 CE or Liferay DXP tomcat environment running, you can drop the jar into the Liferay deploy folder.

Drop into the Gogo Shell and you can even verify that the module has started:

Welcome to Apache Felix Gogo

g! lb | grep Filesystem
  486|Active     |   10|Filesystem Access API (1.0.0)
  487|Active     |   10|Filesystem Access Service (1.0.0)

So we see a good outcome: both of our modules have been deployed and started successfully.

When things go awry: if you've only deployed the service module and not the API module, you'll see something like:

Welcome to Apache Felix Gogo

g! lb | grep Filesystem
  487|Installed  |   10|Filesystem Access Service (1.0.0)

This is really a bad outcome.  Installed simply means the bundle is there in the OSGi container, but it hasn't been resolved and won't be available.  We can try to start the module:

g! start 487
org.osgi.framework.BundleException: Could not resolve module: com.liferay.filesystem.access.svc [487]
  Unresolved requirement: Import-Package: com.liferay.filesystemaccess.api; version="[1.0.0,2.0.0)"

This is how you see why your module isn't started.  These are the kinds of messages you'll see most often: some sort of missing dependency that you'll have to satisfy before your module will start.  The worst part is that you won't see these errors in the logs either.  From the logs' perspective you just won't see a "STARTED" message for your module, and the absence of a message is not exactly an error that jumps out at you.

This current error is easy to fix - just deploy the API module.  As soon as it is started, OSGi will auto-start the service module; you won't have to start it yourself.  You can check that both modules are running by listing the bundles again, and you'll see they are both now marked as Active.

Conclusion

So we have our DS service completed.  The API module defining the service is out there and active.  We have an implementation of the service also deployed and active.  If there was a need for it, we could build alternative implementations of the DS service and deploy those as well, but that's a discussion best saved for a different blog entry.

So we have our DS service, we just don't have a service client yet.  Well that's coming in the next part of the blog.  We're actually going to start building the filesystem access portlet and layer in our client code.  See you there...

 

Liferay 7 Development, Part 3

Technical Blogs September 14, 2016 By David H Nebinger

Introduction

In part 2 of the series we created our initial project modules for the Filesystem Access Portlet project.

In part 3, we're going to move on to fleshing out the DS service that will provide a service layer for our portlet.

Admittedly this service layer is contrived; while building out the initial version of this portlet there was no separate service layer and no external modules - everything was implemented directly in the portlet itself.  The portlet worked just fine.

But that doesn't make a great example of how to build things for Liferay 7 so I refactored out a service tier.

The Service Tier

No, we're not building with ServiceBuilder (even though it is still supported under Liferay 7) as our service layer has no database component.

No, we're not building a service tier using the Fake Entity technique for ServiceBuilder, that's not really necessary anymore in the new Liferay 7 OSGi world.

We're building a straight DS service.  In OSGi, DS is an acronym for Declarative Services.  I don't want to do a deep dive into what DS is (especially since there are already some great ones out there like http://blog.vogella.com/2016/06/21/getting-started-with-osgi-declarative-services/), but suffice it to say we're building out a module to define a service (the filesystem-access-api module) along with an implementation module (the filesystem-access-svc module) and we're also going to have a service consumer (the filesystem-access-web module). DS is the runtime glue that will bring these three modules together.

With DS we get new capabilities in Liferay to define independent services and apis that are not database related.  So you get the chance to build out your own entities, build services to operate on those entities and modularize your code to separate these concerns.  And DS also supports wiring of components at runtime without having to implement tedious ServiceTrackers or other more complex constructs.

The Data Transfer Object

So our DTO is actually going to be quite simple.  We need a container for some filesystem data so we can pass complex data around.

The container poses a question: should we define our container as a class or as an interface?  Either works, but my own recommendation is:

  1. If the container can be a simple POJO w/o any business logic, then use a class.
  2. If the container includes business logic, then use an interface.

When you're just passing data around, a POJO class is a great container.  But as soon as you start building out some methods that have business logic code, you're better off using an interface if only to hide the implementation details from your service API consumers.

Because I've already implemented the code, I know that a simple POJO container is not going to work for what I'll be implementing, so we're going to use an interface for our container item.

In the filesystem-access-api module, add a com.liferay.filesystemaccess.api.FilesystemItem interface.  I'm not going to list all of the methods here, but basically it's going to look something like:

import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.io.Serializable;

/**
 * interface FilesystemItem: This is a container for an individual filesystem item.
 *
 * @author dnebinger
 */
public interface FilesystemItem extends Serializable {

	/**
	 * getAbsolutePath: Returns the filesystem absolute path to the item.
	 * @return String The absolute path.
	 */
	public String getAbsolutePath();

	/**
	 * getData: Returns the byte data from the file.
	 * @return byte array of file data.
	 * @throws IOException in case of read failure.
	 */
	public byte[] getData() throws IOException;

	/**
	 * getDataStream: Returns an InputStream for the file data.
	 * @return InputStream The data input stream.
	 * @throws FileNotFoundException in case of read error.
	 */
	public InputStream getDataStream() throws FileNotFoundException;

	...
}

Notice how we can return complex objects, unlike how we are limited in ServiceBuilder code.  Pretty cool, huh?

So remember how I want to use an interface if there is business logic in play?  The getDataStream() method will have business logic in it for returning an input stream for the file, so an interface is the best route.
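
In the implementation module that method can open the stream lazily, something like this (a sketch assuming the implementation wraps the underlying java.io.File in a _file member):

	@Override
	public InputStream getDataStream() throws FileNotFoundException {
		// open the stream lazily so large files are never read into memory
		return new FileInputStream(_file);
	}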

The Service

The service is going to be a basic interface listing out all of our methods.  We'll add these to the com.liferay.filesystemaccess.api.FilesystemAccessService interface that we created in part 2.  The interface is:

/**
 * interface FilesystemAccessService: The service interface for the filesystem
 * access.
 *
 * NOTE: The methods below refer to the *root path*.  The portlet will allow
 * an admin to constrain access to a given, fixed path.  This is the root path.
 * Whatever the user attempts, they will always be constrained within the root
 * path.
 *
 * @author dnebinger
 */
public interface FilesystemAccessService {

	/**
	 * countFilesystemItems: Counts the items at the given path.
	 * @param rootPath The root path.
	 * @param localPath The local path to the directory.
	 * @return int The count of items at the given path.
	 */
	public int countFilesystemItems(
		final String rootPath, final String localPath);

	/**
	 * createFilesystemItem: Creates a new filesystem item at the given path.
	 * @param rootPath The root path.
	 * @param localPath The local path for the new item.
	 * @param name The name for the new item.
	 * @param directory Flag indicating create a directory or a file.
	 * @return FilesystemItem The new item.
	 * @throws PortalException in case of error.
	 */
	public FilesystemItem createFilesystemItem(
			final String rootPath, final String localPath, final String name,
			final boolean directory)
		throws PortalException;

	/**
	 * deleteFilesystemItem: Deletes the target item.
	 * @param rootPath String The root path.
	 * @param localPath The local path to the item to delete.
	 * @throws PortalException In case of delete error.
	 */
	public void deleteFilesystemItem(
			final String rootPath, final String localPath)
		throws PortalException;

	/**
	 * getFilesystemItem: Returns the FilesystemItem at the given path.
	 * @param rootPath The root path.
	 * @param localPath The local path to the item.
	 * @param timeZone The time zone for the access.
	 * @return FilesystemItem The found item.
	 */
	public FilesystemItem getFilesystemItem(
		final String rootPath, final String localPath,
		final TimeZone timeZone);

	/**
	 * getFilesystemItems: Retrieves the list of items at the given path,
	 * includes support for scrolling through the items.
	 * @param rootPath The root path.
	 * @param localPath The local path to the directory.
	 * @param startIdx The starting index of items to return.
	 * @param endIdx The ending index of items to return.
	 * @param timeZone The time zone for the access.
	 * @return List The list of FilesystemItems.
	 */
	public List<FilesystemItem> getFilesystemItems(
		final String rootPath, final String localPath, final int startIdx,
		final int endIdx, final TimeZone timeZone);

	/**
	 * touchFilesystemItem: Touches the target item, updating the modified
	 * timestamp.
	 * @param rootPath The root path.
	 * @param localPath The local path to the item.
	 */
	public void touchFilesystemItem(
		final String rootPath, final String localPath);

	/**
	 * updateContent: Updates the file content.
	 * @param rootPath The root path.
	 * @param localPath The local path to the item to update.
	 * @param content String The new content to write.
	 * @throws PortalException In case of write error.
	 */
	public void updateContent(
			final String rootPath, final String localPath, final String content)
		throws PortalException;

	/**
	 * updateDownloadable: Updates the downloadable flag for the item.  For
	 * directories, they will be downloadable if the total bytes to zip is less
 * than the given value and it contains fewer than the given number of files.
	 *
	 * These limits allow the administrator to prevent building zip files of
	 * downloads of a directory if building the zip file would overwhelm
	 * available resources.
	 * @param filesystemItem The item to update the downloadable flag on.
	 * @param maxUnzippedSize The max number of bytes to allow for download,
	 *                        for a directory it's the max total unzipped bytes.
	 * @param maxZipFiles The max number of files which can be zipped when
	 *                    checking download of a directory.
	 */
	public void updateDownloadable(
		final FilesystemItem filesystemItem, final long maxUnzippedSize,
		final int maxZipFiles);

	/**
	 * uploadFile: This method handles the upload and store of a file.
	 * @param rootPath The root path.
	 * @param localPath The local path where the file will go.
	 * @param filename The filename for the file.
	 * @param target The uploaded, temporary file.
	 * @param timeZone The user's time zone for modification date display.
	 * @return FilesystemItem The newly loaded filesystem item.
	 * @throws PortalException in case of error.
	 */
	public FilesystemItem uploadFile(
		final String rootPath, final String localPath, final String filename,
		final File target, final TimeZone timeZone) throws PortalException;
}

So this interface allows us to get file items, create and delete them, update the contents, ...  Pretty much everything we need to support for the filesystem access portlet.
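
As a quick preview of the consumer side (a sketch; the real wiring comes when we build the portlet), using the service is as simple as letting OSGi inject it:

	// OSGi injects the running implementation at startup
	@Reference
	private FilesystemAccessService _filesystemAccessService;

	// ... later, list the first twenty items under the configured root
	List<FilesystemItem> items = _filesystemAccessService.getFilesystemItems(
		rootPath, localPath, 0, 20, timeZone);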

Additional Sources

There are some additional files in the com.liferay.filesystemaccess.api package.  These include some exception classes and a constant class.

There is also a com.liferay.filesystemaccess.audit package that contains some classes used for integrating auditing into the portlet.  Since we're exposing the underlying filesystem, we're going to add audit support so we can track who does what.  The classes in this package will help facilitate the auditing.

Configure The Module

So we're not quite finished yet.  We have the code done, but we should configure our bundle so we get the right outcome.

Edit the bundle file so it contains the following:

Bundle-Name: Filesystem Access API
Bundle-SymbolicName: com.liferay.filesystem.access.api
Bundle-Version: 1.0.0
Export-Package:\
    com.liferay.filesystemaccess.api,\
    com.liferay.filesystemaccess.audit
-sources: true

The Liferay standard is to give a meaningful name for the bundle, but the symbolic name should be something akin to the project package.  You definitely want the symbolic name to be unique when it is deployed to the OSGi container.

Since we have created the API, we'll declare the export package(s) for classes that others may consume.

Conclusion

Well the modifications are all done.  At a command prompt, go to the modules/apps/filesystem-access-api directory and execute the following command:

$ ../../../gradlew build

You should end up with your new com.liferay.filesystem.access.api-1.0.0.jar bundle file in the build/libs directory.  If you have a Liferay 7 CE or Liferay DXP tomcat environment running, you can drop the jar into the Liferay deploy folder.

Drop into the Gogo Shell and you can even verify that the module has started:

Welcome to Apache Felix Gogo

g! lb | grep Filesystem
  486|Active     |   10|Filesystem Access API (1.0.0)

So we see that our module deployed and started correctly; all is good.

In the next part of the blog, we'll go about implementing the service module.

Liferay 7 Development, Part 2

Technical Blogs September 13, 2016 By David H Nebinger

Introduction

In part 1, the filesystem access portlet project for Liferay 7 was introduced.

In this part, we're going to use Blade and Gradle to set up our basic project structure.

We're not going to do much in the way of code in this part, it's going to be enough that we get all of our modules defined and ready for development in future parts of the blog.

The Tools - Gradle  https://gradle.org/

So if you've been in Liferay development for a while, I'm sure you know the history of Liferay's build tool selection, but we'll review anyway.

First there was Ant.  Of course, when Liferay started using Ant, everybody was using Ant.  It was really the only game in town.  And with Ant came the Liferay SDK and its rigid structure for where projects had to be and how they had to be structured.

And that was it, for a long time, much longer than necessary.  The rest of the world had moved on to Maven, discarding the verbose and declarative nature of the Ant build.xml file, but Liferay stuck with Ant and the SDK...

With Liferay 6.1, however, Liferay started to catch up and the Liferay Maven artifacts started coming out.  It was rough at first, but by the time we were on to 6.2 Maven support was all there and we were finally able to discard the rigid SDK approach and free our projects from holding all of the dependency jars.

But the development world kept moving on, discarding Maven for cooler build tools with awesome features and flexibility that Maven couldn't offer.

Instead of slowly integrating a new tool in, Liferay went all-in with the introduction of Gradle support in Liferay 7.

Liferay is adopting Gradle for its projects (modularization of the core, side projects, etc.) so it's a really safe bet that Gradle support is going to be around for a while.

Me, I'm much more pragmatic; if Liferay is going to use this tool for their projects, I want to use the tool too; I don't want to be the oddball looking for support on something they may not be interested in supporting some time in the future.

So Gradle it shall be.  Go ahead and download and install Gradle on your system; I'm not going to go into details here, but you really shouldn't have any problems installing it, and the Gradle documentation and other online resources will help you get the job done.

The Gradle Wrapper

I want to take a moment and mention something you'll often see in Gradle projects, the gradlew[.bat] files and the gradle directory.  These guys are the Gradle Wrapper.

The wrapper is basically a fixed version of Gradle that is installed and run locally within the project directory.  Why, when you can install Gradle, would you want to do such a thing?  Well, if you're passing a project to a developer that doesn't have Gradle installed (like you, when you pull down a Liferay 7 project for the first time), the wrapper allows them to build the project anyway.  It's also going to stick to the version of Gradle used to install the wrapper; this removes version-specific issues from the build process since the right version of Gradle travels with the project regardless of what version the developer has.

If you're not creating projects, you probably won't need to set up the Gradle Wrapper.  If you're creating a project to share with others, when you initialize the Gradle project you'll get the gradle wrapper in there too.

After the wrapper is generated, you're going to want to use the wrapper from that point forward.  Instead of issuing the command "gradle build" to do a build, you'd use the wrapper version, "gradlew build".

The Tools - Blade  https://github.com/liferay/liferay-blade-cli

Okay, some explanation here before we get too far in; there are two Liferay projects which you may find referred to as Blade.

The one we're going to use, the Blade CLI, well we'll get to that one in just a moment.

The other one is the Blade Samples project, https://github.com/liferay/liferay-blade-samples.  The Blade Samples project is a big repository of Liferay 7 modules, a set of 31 (currently), all of them provided using different build tools (BND, Gradle, Liferay's Gradle plugin, and Maven).  You can use it to, say, master some of the OSGi module stuff using your favorite build tool and, once you understand that, review the implementation using a different build tool so you can see how to translate what you know to the new build tool.  All of that said, we won't really come back to the Blade Samples again, but I did want to highlight it as it can be a valuable resource for you in the future.

Back to the Blade CLI.  This command line tool is a utility to help scaffold your Gradle-based Liferay modules.  It can generate the foundation for you so you can jump right into fleshing out your code.

If you haven't already done so, install the Blade CLI per the instructions on the Blade website.  If you have already installed it, you should update it using the command:

[sudo] blade update

Note you only need the sudo portion if you are on a system that has a sudo command; if you don't know whether you have this command, it is likely that you do not.

The Project Structure

One of our requirements is to build full OSGi-compliant Liferay 7 modules.  That's a mouthful, but it draws out an important point.

OSGi-compliant modules means we're going to be building jars.  OSGi bundles, actually, but in Liferay we call them modules because, well, they actually will include some Liferay specific stuff and have some Liferay runtime container dependencies.  Liferay calls these guys modules, so we will too.  When someone says bundle, just know they're talking about modules.

Now if you're not familiar with what that really means, well, then it's time to start learning about them.  If I had to boil it down to something simple, I guess I'd say it's pretty close in concept to the Spring Framework - small pieces of functionality that are wired together at runtime to build a functional application.  Obviously it is a lot bigger than that, with multiple classloaders, a slew of runtime dynamic configuration support, etc.

But thinking along the lines of Spring we can come up with a pretty good architecture for our portlet implementation.

Now we could just put all the code into one bundle and everything would work, it would be simple, and it would get the job done.  But if we're really going to learn something here, let's over-architect it as it will make a better educational experience.

So, thinking along the Spring lines, we know we're going to have some sort of "service" that we'll use to hold our file system access business logic.  Like in Spring, we'll separate that into an API module (the Spring interfaces) and the Service module (the concrete implementation classes).  And yes, we still have our portlet to create, so that will be a module too.

To get back on task, our project will consist of one set of parent build files and three modules.

Building The Project Structure

Okay, we have the background for the tools behind us, time to build out the structure, so let's get to it.

So first you need a directory somewhere where you're going to be creating everything.  I'm going to create the "filesystem-access" folder in my local projects directory; every other step below and in the other parts of this blog will all be taking place within this folder.

Step 1 - Set Up Workspace

The first thing we need to do is initialize the Blade/Gradle environment in our project directory:

$ blade init
Note that this command will fail if there are files in the directory, so you want to do this before any other steps.

The structure that you get here is actually referred to as the "Liferay Workspace".  The Liferay Workspace is a container for building all of your plugins.  It's kind of like the old SDK except it is based on Gradle (not Ant) and only distinguishes between modules and themes (unlike all of the different plugin types supported by the SDK).

We'll be using this workspace as our foundation for the project.  For a project that will be multiple modules, the Liferay Workspace is a good foundation.

Step 2 - Create The Modules

We will be switching to the Blade CLI to create our modules.  Change to the modules/apps directory (you may need to create the apps directory in modules) and execute the following commands to create the modules:

$ blade create -t api -p com.liferay.filesystemaccess filesystem-access-api
$ blade create -t api -p com.liferay.filesystemaccess.svc filesystem-access-svc
$ blade create -t mvcportlet -p com.liferay.filesystemaccess -c FilesystemAccessPortlet filesystem-access-web

Step 3 - Clean Up The Modules

Since we created two projects from the api template, we're going to take some time to undo that effort.

  • In the filesystem-access-api module, add a blank interface, com.liferay.filesystemaccess.api.FilesystemAccessService (we'll fill it out in the next blog).
  • In the filesystem-access-api module, delete the com.liferay.filesystemaccess.api.FilesystemAccessApiActivator class.
  • In the filesystem-access-svc module, change the package from com.liferay.filesystemaccess.svc.api to com.liferay.filesystemaccess.svc.internal.
  • In the filesystem-access-svc module, add a blank class, com.liferay.filesystemaccess.svc.internal.FilesystemAccessServiceImpl (we'll fill it out in the next blog).
  • In the filesystem-access-svc module, delete the com.liferay.filesystemaccess.svc.FilesystemAccessServiceActivator class.

Conclusion

Okay, so far we haven't done very much, but let's review anyway:

  • We reviewed a little about Gradle and Blade.
  • We created a Liferay Workspace to hold our modules.
  • We created our initial yet empty modules.

In the next blog post we'll get into some real development stuff, building out a DS service layer...

The project has been pushed up to GitHub, you can access it here: https://github.com/dnebing/filesystem-access

The files for this part of the blog are in branch part-2: https://github.com/dnebing/filesystem-access/tree/part-2

Liferay 7 Development, Part 1

Technical Blogs September 13, 2016 By David H Nebinger

Introduction

So I've been doing some LR7 development recently.  It's an exciting and challenging platform to get your head around and I thought I would just work up an example portlet, spread it across a number of blog entries, and hopefully deliver something useful in the process.

And that's what I'm going to do here.  Not only is the blog going to cover all of the step by step sort of stuff, but the project itself is going to be available on GitHub for you to pull and review at your leisure.

Let's dive in...

The Project

First off, I cannot take credit for the project idea, I'm saying that up front so there's no confusion.

I was recently working in an environment where developers had no direct access to the development server - no way to view or edit files, no way to download log files to review what was going on, no way to tweak property files, ...  It wasn't so much that we couldn't actually do these kinds of things (the sys admin would let us look at their terminal viewing files, the SA would make whatever changes we asked, the SA would give us the logs as is, ...); the developers were just not allowed to have command line access on the server.

So this planted a seed that started with the question: has anyone exposed access to the filesystem in the portal before?

Actually they have.  I found Jan Eerdekens' blog about a portlet he created: https://web.liferay.com/web/fimez/blog/-/blogs/pandora-s-box-for-liferay-lfm-portlet

So cool, there was precedent, but Jan's portlet is for Liferay 6.2, and that wasn't going to work for me with 7.

So the seed has grown into the blog project: Create a filesystem access portlet.

So now I have a plant (the project), but it needs some nurturing to get it to bear fruit.

The Specifications

Our portlet has a basic set of specifications that it must satisfy:

  • Must be able to list files/folders from the filesystem.
  • Must be able to view file contents within the browser, preferably with syntax highlighting.
  • Must be able to edit file contents within the browser, preferably with syntax highlighting.
  • Must be able to add new files and folders.
  • Must be able to delete files and folders.
  • Must be able to 'touch' files and folders, updating the last modified dates for the files and folders.
  • Must be able to download files and directories (as zip files) from the server.
  • Must be able to upload files to the server.
  • Must be configurable so these features can be disabled, allowing an admin to choose a subset of these features which are allowed.

That's a pretty sweet set of specifications.  The good news is that they are all pretty easy to knock out.

The Requirements

There are some additional requirements for our project:

  • Must run on Liferay 7 CE GA2 (or newer).
  • Must use Gradle for the build tool.
  • Must be full-on OSGi modules, no legacy stuff.
  • Must leverage Lexicon.
  • Must leverage the new Configuration facilities.
  • Must leverage the new Liferay MVC portlet framework.
  • Must provide new Panel Application (side bar) support.

So these are the real fun parts of the project.  We're going to be using the new Liferay build tools.  We're going to use OSGi.  We're going to leverage the new Liferay MVC framework and all of the other goodies.  We're going to see what role Lexicon plays in our app.

One additional requirement - the project must be available on GitHub.  This is going to be our tool for learning LR7 development, so we're going to share.

Conclusion

Well, that's it for part 1.  The project has been introduced, so we all know where we're going and how we'll judge whether the outcome is successful.

Look forward to seeing you in the next installment...

 

Liferay 7, Service Builder and External Databases

Technical Blogs July 13, 2016 By David H Nebinger

So I'm a long-time supporter of ServiceBuilder.  I saw its purpose way back on Liferay 4 and 5 and have championed it in the forums and here in my blog.

With the release of Liferay 7, ServiceBuilder has undergone a few changes mostly related to the OSGi modularization.  ServiceBuilder will now create two modules, one API module (comparable to the old service jar w/ the interfaces) and a service module (comparable to the service implementation that used to be part of a portlet).

But at its core, it still does a lot of the same things.  The service.xml file defines all of the entities, you "buildService" (in gradle speak) to rebuild the generated code, consumers still use the API module, and your implementation is encapsulated in the service module.  The generated code and the Liferay ServiceBuilder framework are built on top of Hibernate, so all of the same Spring and Hibernate facets still apply.  All of the features used in the past are also supported, including custom SQL, DynamicQuery, custom Finders and even External Database support.

External Database support is still included for ServiceBuilder, but there are some restrictions and setup requirements that are necessary to make them work under Liferay 7.

Examples are a good way to work through the process, so I'm going to present a simple ServiceBuilder component that will be tracking logins in an HSQL database separate from the Liferay database.  That last part is obviously contrived since one would not want to go to HSQL for anything real, but you're free to substitute any supported DB for the platform you're targeting.

The Project

So I'll be using Gradle, JDK 1.8 and Liferay CE 7 GA2 for the project.  Here's the command to create the project:

blade create -t servicebuilder -p com.liferay.example.servicebuilder.extdb sb-extdb

This will create a ServiceBuilder project with two modules:

  • sb-extdb-api: The API module that consumers will depend on.
  • sb-extdb-service: The service implementation module.

The Entity

So the first thing we need to define is our entity.  The service.xml file is in the sb-extdb-service module, and here's what we'll start with:

<?xml version="1.0"?>
<!DOCTYPE service-builder PUBLIC "-//Liferay//DTD Service Builder 7.0.0//EN" "http://www.liferay.com/dtd/liferay-service-builder_7_0_0.dtd">

<service-builder package-path="com.liferay.example.servicebuilder.extdb">

  <!-- Define a namespace for our example -->
  <namespace>ExtDB</namespace>

  <!-- Define an entity for tracking login information. -->
  <entity name="UserLogin" uuid="false" local-service="true" remote-service="false" data-source="extDataSource" >
  <!-- session-factory="extSessionFactory" tx-manager="extTransactionManager" -->

    <!-- userId is our primary key. -->
    <column name="userId" type="long" primary="true" />

    <!-- We'll track the date of last login -->
    <column name="lastLogin" type="Date" />

    <!-- We'll track the total number of individual logins for the user -->
    <column name="totalLogins" type="long" />

    <!-- Let's also track the longest time between logins -->
    <column name="longestTimeBetweenLogins" type="long" />

    <!-- And we'll also track the shortest time between logins -->
    <column name="shortestTimeBetweenLogins" type="long" />
  </entity>
</service-builder>

This is a pretty simple entity for tracking user logins.  The user id will be the primary key and we'll track dates, times between logins as well as the user's total logins.

Just as in previous versions of Liferay, we must specify the external data source for our entity/entities.

ServiceBuilder will create and manage tables only for the Liferay database.  ServiceBuilder will not manage the tables, indexes, etc. for any external databases.

In our particular example we're going to be wiring up to HSQL, so I've taken the steps to create the HSQL script file with the table definition as:

CREATE MEMORY TABLE PUBLIC.EXTDB_USERLOGIN(
    USERID BIGINT NOT NULL PRIMARY KEY,
    LASTLOGIN TIMESTAMP,
    TOTALLOGINS BIGINT,
    LONGESTTIMEBETWEENLOGINS BIGINT,
    SHORTESTTIMEBETWEENLOGINS BIGINT);

The Service

The next thing we need to do is build the services.  In the sb-extdb-service directory, run:

gradle buildService

Eventually we're going to build out our post-login hook to manage this tracking, so we can guess that we'll want a method to simplify the login tracking.  Here's the method that we'll add to UserLoginLocalServiceImpl.java:

public class UserLoginLocalServiceImpl extends UserLoginLocalServiceBaseImpl {
  private static final Log logger = LogFactoryUtil.getLog(UserLoginLocalServiceImpl.class);

  /**
  * updateUserLogin: Updates the user login record with the given info.
  * @param userId User who logged in.
  * @param loginDate Date when the user logged in.
  */
  public void updateUserLogin(final long userId, final Date loginDate) {
    UserLogin login;

    // first try to get the existing record for the user
    login = fetchUserLogin(userId);

    if (login == null) {
      // user has never logged in before, need a new record
      if (logger.isDebugEnabled()) logger.debug("User " + userId + " has never logged in before.");

      // create a new record
      login = createUserLogin(userId);

      // update the login date
      login.setLastLogin(loginDate);

      // initialize the values
      login.setTotalLogins(1);
      login.setShortestTimeBetweenLogins(Long.MAX_VALUE);
      login.setLongestTimeBetweenLogins(0);

      // add the login
      addUserLogin(login);
    } else {
      // user has logged in before, just need to update record.

      // increment the logins count
      login.setTotalLogins(login.getTotalLogins() + 1);

      // determine the duration time between the current and last login
      long duration = loginDate.getTime() - login.getLastLogin().getTime();

      // if this duration is longer than last, update the longest duration.
      if (duration > login.getLongestTimeBetweenLogins()) {
        login.setLongestTimeBetweenLogins(duration);
      }

      // if this duration is shorter than last, update the shortest duration.
      if (duration < login.getShortestTimeBetweenLogins()) {
        login.setShortestTimeBetweenLogins(duration);
      }

      // update the last login timestamp
      login.setLastLogin(loginDate);

      // update the record
      updateUserLogin(login);
    }
  }
}

After adding the method, we'll need to build services again for the method to get into the API.
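
As before, this is run from the sb-extdb-service directory:

gradle buildService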

Defining The Data Source Beans

So we now need to define our data source beans for the external data source.  We'll create an XML file, ext-db-spring.xml, in the sb-extdb-service/src/main/resources/META-INF/spring directory.  When our module is loaded, the Spring files in this directory will get processed automatically into the module's Spring context.

<?xml version="1.0"?>

<beans
    default-destroy-method="destroy"
    default-init-method="afterPropertiesSet"
    xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd"
>

  <!--
    NOTE: Current restriction in LR7's handling of external data sources requires us to redefine the
    liferayDataSource bean in our spring configuration.

    The following beans define a new liferayDataSource based on the jdbc.ext. prefix in portal-ext.properties.
   -->
  <bean class="com.liferay.portal.dao.jdbc.spring.DataSourceFactoryBean" id="liferayDataSourceImpl">
    <property name="propertyPrefix" value="jdbc.ext." />
  </bean>

  <bean class="org.springframework.jdbc.datasource.LazyConnectionDataSourceProxy" id="liferayDataSource">
    <property name="targetDataSource" ref="liferayDataSourceImpl" />
  </bean>

  <!--
    So our entities are all appropriately tagged with the extDataSource, we'll alias the above
    liferayDataSource so it matches the entities.
   -->

  <alias alias="extDataSource" name="liferayDataSource" />
</beans>

These bean definitions are a big departure from the classic way of using an external data source.  Previously we would define separate data source beans from the Liferay Data Source beans, but under Liferay 7 we must redefine the Liferay Data Source to point at our external data source.

This has a few important side effects:

  • Only one data source can be used in a single ServiceBuilder module.  If you have three different external data sources, you must create three different ServiceBuilder modules, one for each data source.
  • The normal Liferay transaction management limits the scope of transactions to the current module.  To manage transactions that cross ServiceBuilder modules, you must define and use XA transactions.
  • When building modules for Service Builder / Spring Extender, classes referenced in Spring XML files will not be declared as OSGi import requirements, leading to ClassNotFoundExceptions.  Packages such as org.springframework.jdbc.datasource and com.liferay.portal.dao.jdbc.spring must be manually imported in the bnd.bnd file, as shown below.
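
For example, the Import-Package header in bnd.bnd might look something like this (a sketch; the trailing * keeps all of the imports bnd calculates automatically):

Import-Package:\
  com.liferay.portal.dao.jdbc.spring,\
  org.springframework.jdbc.datasource,\
  *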

The last line, the alias, defines a Spring alias so the liferayDataSource bean also answers to the data source name used in your service.xml file.

So, back to our example.  We're planning on writing our records into HSQL, so we need to add the properties to the portal-ext.properties for our external datasource connection:

# Connection details for the HSQL database
jdbc.ext.driverClassName=org.hsqldb.jdbc.JDBCDriver
jdbc.ext.url=jdbc:hsqldb:${liferay.home}/data/hypersonic/logins;hsqldb.write_delay=false
jdbc.ext.username=sa
jdbc.ext.password=

The Post Login Hook

So we'll use blade to create the post login hook.  In the sb-extdb main directory, run blade to create the module:

blade create -p com.liferay.example.servicebuilder.extdb.event -t service -s com.liferay.portal.kernel.events.LifecycleAction sb-extdb-postlogin

Since blade doesn't know we're really adding a submodule, it has created a full standalone gradle project.  While not shown here, I modified a number of the gradle project files to make the postlogin module a submodule of the project.

We'll create the com.liferay.example.servicebuilder.extdb.event.UserLoginTrackerAction with the following details:

/**
 * class UserLoginTrackerAction: This is the post login hook to track user logins.
 *
 * @author dnebinger
 */
@Component(
  immediate = true, property = {"key=login.events.post"},
  service = LifecycleAction.class
)
public class UserLoginTrackerAction implements LifecycleAction {

  private static final Log logger = LogFactoryUtil.getLog(UserLoginTrackerAction.class);

  /**
   * processLifecycleEvent: Invoked when the registered event is triggered.
   * @param lifecycleEvent
   * @throws ActionException
   */
  @Override
  public void processLifecycleEvent(LifecycleEvent lifecycleEvent) throws ActionException {

    // okay, we need the user login for the event
    User user = null;

    try {
      user = PortalUtil.getUser(lifecycleEvent.getRequest());
    } catch (PortalException e) {
      logger.error("Error accessing login user: " + e.getMessage(), e);
    }

    if (user == null) {
      logger.warn("Could not find the logged in user, nothing to track.");

      return;
    }

    // we have the user, let's invoke the service
    getService().updateUserLogin(user.getUserId(), new Date());

    // alternatively we could just use the local service util:
    // UserLoginLocalServiceUtil.updateUserLogin(user.getUserId(), new Date());
  }

  /**
   * getService: Returns the user tracker service instance.
   * @return UserLoginLocalService The instance to use.
   */
  public UserLoginLocalService getService() {
    return _serviceTracker.getService();
  }

  // use the OSGi service tracker to get an instance of the service when available.
  private ServiceTracker<UserLoginLocalService, UserLoginLocalService> _serviceTracker =
    ServiceTrackerFactory.open(UserLoginLocalService.class);
}
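
One note on lifecycle: the tracker above is opened when the component is instantiated but never closed.  If you want to be tidy about it, you might close the tracker on component deactivation; here's a minimal sketch (the deactivate() method name is my own):

@Deactivate
protected void deactivate() {
  // release the service tracker when the component shuts down
  _serviceTracker.close();
}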

Checkpoint: Testing

At this point we should be able to build and deploy the api module, the service module and the post login hook module.  We'll use the gradle command:

gradle build

In each of the submodules you'll find a build/libs directory where the bundle jars are.  Fire up your Liferay 7 CE GA2 instance (make sure the jdbc.ext properties are in the portal-ext.properties file before starting) and put the jars in the $LIFERAY_HOME/deploy folder.  Liferay will pick them up and deploy them.

Drop into the gogo shell and check your modules to ensure they are started.

Log into the portal a few times and you should be able to find the database in the data directory and browse the records to see what it contains.
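
For example, pointing any SQL client at the logins database (HSQL ships with a basic DatabaseManager tool), a quick sanity check might look like:

SELECT USERID, LASTLOGIN, TOTALLOGINS FROM EXTDB_USERLOGIN;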

Conclusion

Using external data sources with Liferay 7's ServiceBuilder is still supported.  It's still a great tool for building a db-based OSGi module, and it still lets you generate the bulk of the DB access code while encapsulating it behind an API in a controlled manner.

We reviewed the new constraints on ServiceBuilder imposed by Liferay 7:

  • Only one (external) data source per Service Builder module.
  • The external data source objects, the tables, indexes, etc., must be manually managed.
  • For a transaction to span multiple Service Builder modules, XA transactions must be used.

You can find the GitHub project code for this blog here: https://github.com/dnebing/sb-extdb

OSGi Module Dependencies

Technical Blogs July 6, 2016 By David H Nebinger

It's going to happen.  At some point in your LR7 development, you're going to build a module which has runtime dependencies.  How do you satisfy those dependencies though?

In this brief blog entry I'll cover some of the options available...

So let's say you have a module which depends upon iText (and its dependencies).  It doesn't really matter what your module is doing, but you have this dependency and now have to figure out how to satisfy it.

Option 1 - Make Them Global

This is probably the easiest option but is also probably the worst.  Any jars that are in the global class loader (Tomcat's lib and lib/ext, for example), are classes that can be accessed anywhere, including within the Liferay OSGi container.

But global jars have the typical global problems.  Not only do they need to be global, but all of their dependencies must also be global.  Also, the global classes are the only versions available; you can't vary them to allow different consumers to leverage different versions.

Option 2 - Let OSGi Handle Them

This is the second easiest option, but it's likely to not work.  If you declare a runtime dependency in your module and if OSGi has a bundle that satisfies the dependency, it will be automatically available to your module.

This will work when you know the dependency can be satisfied, either because you're leveraging something the portal provides or you've actually deployed the dependency into the OSGi container (some jars also conveniently include OSGi bundle information and can be deployed directly into the container).

For our example, however, it is unlikely that iText will have already been deployed into OSGi as a module, so relying on OSGi to inject it may not end well.

Declaring the runtime dependency is going to be handled in your build.gradle file.  Here's a snippet for the iText runtime dependency:

runtime group: 'com.lowagie', name: 'itext', version: '1.4.8'

If iText (and its dependencies) have been successfully deployed as OSGi bundles, your runtime declaration will ensure they are available to your module.  If iText is not available, your module will not start and will report unsatisfied dependencies.

Option 3 - Make An Uber Module

Just like uber jars, uber modules have all of the dependent classes exploded out of their original jars and made available within the module jar.

This is actually quite easy to do using Gradle and BND.

In your build.gradle file, you should declare your runtime dependencies just as you did for Option 2.

To make the uber module, you also need to include the resources in your bnd.bnd file:

Include-Resource: @itext-1.4.8.jar

So here you include the name of the dependent jar; usually you can see what it is when Gradle is downloading the dependency, or by browsing your Maven repository.

Note that you must also include any dependent jars in your include statement.  For example, iText 2.0.8 has dependencies on BouncyCastle mail and prov, so those would need to be added:

Include-Resource: @itext-2.0.8.jar,@bcmail-138.jar,@bcprov-138.jar

You may need to add these as runtime dependencies so Gradle will have them available for inclusion.
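
For example, the runtime section of build.gradle might grow to something like this (the coordinates are my best recollection; verify the exact group/artifact names against your repository, since the BouncyCastle jars come in JDK-specific variants):

runtime group: 'com.lowagie', name: 'itext', version: '2.0.8'
runtime group: 'bouncycastle', name: 'bcmail-jdk14', version: '138'
runtime group: 'bouncycastle', name: 'bcprov-jdk14', version: '138'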

If you use a zip tool to crack open your module jar, you'll see that all of the individual jars have been exploded and all classes are in the jar.

Option 4 - Include the Jars in the Module

The last option is to include the jars in the module itself, not as an uber module, but just containing the jar files within the module jar.

Similar to option 2 and 3, you will declare your runtime dependencies in the build.gradle file.

The bulk of the work is going to be done in the bnd.bnd file.

First you need to define the Bundle-ClassPath attribute to include classes in the module jar but also the extra dependency jars.  In the example below, I'm indicating that my iText jar will be in a lib directory within the module jar:

Bundle-ClassPath:\
  .,\
  lib/itext.jar

Rather than use the Include-Resource header, we're going to use the -includeresource directive to pull the jars into the bundle:

-includeresource:\
  lib/itext.jar=itext-1.4.8.jar

In this format we're saying that lib/itext.jar will be pulled in from itext-1.4.8.jar (which is one of our runtime dependencies so Gradle will have it available for the build).

This format also supports the use of wildcards so you can leave version selection to the build.gradle file.  Here's an example for including any version of commons-lang:

-includeresource:\
  lib/itext.jar=itext-1.4.8.jar,\
  lib/commons-lang.jar=commons-lang-[0-9]*.jar

If you use a zip tool to crack open your module jar, you'll find there are jars now in the bundle under the lib directory.

Conclusion

So which of these options should you choose?  As with all things Liferay, it depends.

The global option is easy as long as you don't need different versions of a jar but do have a lot of modules depending on it.  For example, if you had 20 different modules all dependent upon iText 1.4.8, global may be the best path with regards to runtime resource consumption.

Option 2 can be an easy solution if the dependent jar is also an OSGi bundle.  In this case you can allow for multiple versions and don't have to worry about bnd file editing.

Option 3 and 4 are going to be the most common route to choose however.  In both of these cases your dependencies are included within the module so the OSGi's class loader is not polluted with different versions of dependent jars.  They are also environment-agnostic; since the modules contain all of their dependencies, the environment does not need to be prepared prior to module deployment.

Personally I stick with Option 4 - uber jars will tend to step on each other when expanding jars that contain the same path/file (usually xml or config info).  Option 4 doesn't suffer from these sorts of issues.

Enjoy!

Using Gravatars...

General Blogs October 14, 2015 By David H Nebinger

So I'm here late one night and I'm wondering why Liferay is hosting profile pics...

I mean, in this day and age we all have accounts all over the place and many of those places also support profile pics/avatars.  In each of these sites when we create accounts we go and pull our standard avatar pic in and move on.

Recently, however, I was working with a site that used an avatar web service and I thought, "Wow, this is cool."

Basically the fine folks at http://www.gravatar.com have created this site where they host avatar images for you.  As a user you create an account (keyed off your email address) and then load your avatar pic.

Then, on any site that points to gravatar and knows your email address, they can display your gravatar image.  You don't have to have a copy on the separate sites anymore!

And from a user perspective, you get the benefit of being able to change your avatar on one site and all of the other gravatar-aware sites will update automatically.

So my first thought was how I could bring this to Liferay...

Liferay Avatars

So the Liferay profile pics/avatars are all handled by the getPortraitURL() method of the User model class.  At first blush this would seem like an easy way to override Liferay's default portrait URL handling.

All we need to do is create a class which extends UserWrapper and overrides the getPortraitURL() method.  I'm including the code below that will actually do the job:

/**
 * GRAVATAR_BASE_URL: The base URL to use for non-secure gravatars.
 */
public static final String GRAVATAR_BASE_URL = "http://www.gravatar.com/avatar/";
/**
 * GRAVATAR_SECURE_BASE_URL: The base URL to use for secure gravatars.
 */
public static final String GRAVATAR_SECURE_BASE_URL = "https://secure.gravatar.com/avatar/";

/**
 * getPortraitURL: Overriding method to return the gravatar URL instead of the portal's URL.
 * @param themeDisplay
 * @return String The gravatar URL.
 * @throws PortalException
 * @throws SystemException
 */
@Override
public String getPortraitURL(ThemeDisplay themeDisplay) throws PortalException, SystemException {
	String emailAddress = getEmailAddress();

	String hash = "00000000000000000000000000000000";

	if ((emailAddress == null) || (emailAddress.trim().length() < 1)) {
		// no email address

	} else {
		hash = md5Hex(emailAddress.trim().toLowerCase());
	}

	String def = super.getPortraitURL(themeDisplay);

	StringBuilder sb = new StringBuilder();

	boolean secure = StringUtil.equalsIgnoreCase(Http.HTTPS, PropsUtil.get(PropsKeys.WEB_SERVER_PROTOCOL));

	if (secure) {
		sb.append(GRAVATAR_SECURE_BASE_URL);
	} else {
		sb.append(GRAVATAR_BASE_URL);
	}

	// add the hash value
	sb.append(hash);

	if ((def != null) && (def.trim().length() > 0)) {
		// add the default param
		try {
			String url = URLEncoder.encode(def.trim(), "UTF-8");
			sb.append("?d=").append(url);
		} catch (UnsupportedEncodingException e) {
			// UTF-8 is always supported, so this should never happen.
		}
	}

	return sb.toString();
}

/**
 * hex: Utility function from gravatar to hex a byte array.
 * @param array
 * @return String The hex string.
 * @link https://en.gravatar.com/site/implement/images/java/
 */
public static String hex(byte[] array) {
	StringBuffer sb = new StringBuffer();

	for (int i = 0; i < array.length; ++i) {
		sb.append(Integer.toHexString((array[i] & 0xFF) | 0x100).substring(1, 3));
	}

	return sb.toString();
}

/**
 * md5Hex: Utility function to create an MD5 hex string from a message.
 * @param message
 * @return String The md5 hex.
 * @link https://en.gravatar.com/site/implement/images/java/
 */
public static String md5Hex (String message) {
	try {
		MessageDigest md = MessageDigest.getInstance("MD5");
		return hex(md.digest(message.getBytes("CP1252")));
	} catch (NoSuchAlgorithmException e) {
		// MD5 is a required MessageDigest algorithm, so this should not happen.
	} catch (UnsupportedEncodingException e) {
		// CP1252 should be available on typical JVMs; if not, we fall through and return null.
	}

	return null;
}

The great part about this implementation is that if the user does not have a gravatar account, the Liferay profile pic will end up being used instead.
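
In URL terms, the override above produces links that follow this pattern (the placeholders are mine, not literal values):

http://www.gravatar.com/avatar/<md5-of-trimmed-lowercased-email>?d=<url-encoded-liferay-portrait-url>

Gravatar serves the image registered for the hash when the user has an account; otherwise it redirects to the URL passed in the d parameter, which is why the Liferay portrait shows up as the fallback.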

Liferay Service Wrappers

So now you have a UserWrapper extension class that uses gravatar.com, but how do you get Liferay to use it?

Well you create one or more service wrappers and override the necessary methods so your UserWrapper will be returned.

Sounds easy, right?  Well it isn't.

The first obvious wrapper to create is the UserLocalServiceWrapper extension class.  That's where the User records are returned, so that's a good place to start.

First we'll add some support methods to handle wrapping the users:

/**
 * wrap: Handles the wrapping of a single instance.
 * @param user
 * @return User The wrapped user.
 */
protected User wrap(final User user) {
	if (user == null) return null;

	// if the user is already wrapped, no reason to wrap again, just return it.
	if (ClassUtils.isAssignable(user.getClass(), GravatarUserWrapper.class)) {
		return user;
	}

	return new GravatarUserWrapper(user);
}

/**
 * wrap: Handles the wrapping of multiple instances in a list.
 * @param users
 * @return List The list of wrapped users.
 */
protected List<User> wrap(final List<User> users) {
	if ((users == null) || (users.isEmpty())) return users;

	// create a list to put the wrapped users in to return.
	List<User> toReturn = new ArrayList<User>(users.size());

	for (User user : users) {
		toReturn.add(wrap(user));
	}

	return toReturn;
}

That's actually the easy part.  The next part is to override each method that returns a User instance or a List of users:

@Override
public User addDefaultAdminUser(long companyId, String screenName, String emailAddress, Locale locale, String firstName, String middleName, String lastName) throws PortalException, SystemException {
	return wrap(super.addDefaultAdminUser(companyId, screenName, emailAddress, locale, firstName, middleName, lastName));
}

@Override
public User addUser(long creatorUserId, long companyId, boolean autoPassword, String password1, String password2, boolean autoScreenName, String screenName, String emailAddress, long facebookId, String openId, Locale locale, String firstName, String middleName, String lastName, int prefixId, int suffixId, boolean male, int birthdayMonth, int birthdayDay, int birthdayYear, String jobTitle, long[] groupIds, long[] organizationIds, long[] roleIds, long[] userGroupIds, boolean sendEmail, ServiceContext serviceContext) throws PortalException, SystemException {
	return wrap(super.addUser(creatorUserId, companyId, autoPassword, password1, password2, autoScreenName, screenName, emailAddress, facebookId, openId, locale, firstName, middleName, lastName, prefixId, suffixId, male, birthdayMonth, birthdayDay, birthdayYear, jobTitle, groupIds, organizationIds, roleIds, userGroupIds, sendEmail, serviceContext));
}

@Override
public User addUser(User user) throws SystemException {
	return wrap(super.addUser(user));
}

@Override
public User addUserWithWorkflow(long creatorUserId, long companyId, boolean autoPassword, String password1, String password2, boolean autoScreenName, String screenName, String emailAddress, long facebookId, String openId, Locale locale, String firstName, String middleName, String lastName, int prefixId, int suffixId, boolean male, int birthdayMonth, int birthdayDay, int birthdayYear, String jobTitle, long[] groupIds, long[] organizationIds, long[] roleIds, long[] userGroupIds, boolean sendEmail, ServiceContext serviceContext) throws PortalException, SystemException {
	return wrap(super.addUserWithWorkflow(creatorUserId, companyId, autoPassword, password1, password2, autoScreenName, screenName, emailAddress, facebookId, openId, locale, firstName, middleName, lastName, prefixId, suffixId, male, birthdayMonth, birthdayDay, birthdayYear, jobTitle, groupIds, organizationIds, roleIds, userGroupIds, sendEmail, serviceContext));
}

@Override
public User fetchUser(long userId) throws SystemException {
	return wrap(super.fetchUser(userId));
}

@Override
public User fetchUserByEmailAddress(long companyId, String emailAddress) throws SystemException {
	return wrap(super.fetchUserByEmailAddress(companyId, emailAddress));
}

@Override
public User fetchUserByFacebookId(long companyId, long facebookId) throws SystemException {
	return wrap(super.fetchUserByFacebookId(companyId, facebookId));
}

@Override
public User fetchUserById(long userId) throws SystemException {
	return wrap(super.fetchUserById(userId));
}

@Override
public User fetchUserByOpenId(long companyId, String openId) throws SystemException {
	return wrap(super.fetchUserByOpenId(companyId, openId));
}

@Override
public User fetchUserByScreenName(long companyId, String screenName) throws SystemException {
	return wrap(super.fetchUserByScreenName(companyId, screenName));
}

@Override
public User fetchUserByUuidAndCompanyId(String uuid, long companyId) throws SystemException {
	return wrap(super.fetchUserByUuidAndCompanyId(uuid, companyId));
}

@Override
public List<User> getCompanyUsers(long companyId, int start, int end) throws SystemException {
	return wrap(super.getCompanyUsers(companyId, start, end));
}

@Override
public User getDefaultUser(long companyId) throws PortalException, SystemException {
	return wrap(super.getDefaultUser(companyId));
}

@Override
public List<User> getGroupUsers(long groupId) throws SystemException {
	return wrap(super.getGroupUsers(groupId));
}

@Override
public List<User> getGroupUsers(long groupId, int start, int end) throws SystemException {
	return wrap(super.getGroupUsers(groupId, start, end));
}

@Override
public List<User> getGroupUsers(long groupId, int start, int end, OrderByComparator orderByComparator) throws SystemException {
	return wrap(super.getGroupUsers(groupId, start, end, orderByComparator));
}

@Override
public List<User> getInheritedRoleUsers(long roleId, int start, int end, OrderByComparator obc) throws PortalException, SystemException {
	return wrap(super.getInheritedRoleUsers(roleId, start, end, obc));
}

@Override
public List<User> getNoAnnouncementsDeliveries(String type) throws SystemException {
	return wrap(super.getNoAnnouncementsDeliveries(type));
}

@Override
public List<User> getNoContacts() throws SystemException {
	return wrap(super.getNoContacts());
}

@Override
public List<User> getNoGroups() throws SystemException {
	return wrap(super.getNoGroups());
}

@Override
public List<User> getOrganizationUsers(long organizationId) throws SystemException {
	return wrap(super.getOrganizationUsers(organizationId));
}

@Override
public List<User> getOrganizationUsers(long organizationId, int start, int end) throws SystemException {
	return wrap(super.getOrganizationUsers(organizationId, start, end));
}

@Override
public List<User> getOrganizationUsers(long organizationId, int start, int end, OrderByComparator orderByComparator) throws SystemException {
	return wrap(super.getOrganizationUsers(organizationId, start, end, orderByComparator));
}

@Override
public List<User> getRoleUsers(long roleId) throws SystemException {
	return wrap(super.getRoleUsers(roleId));
}

@Override
public List<User> getRoleUsers(long roleId, int start, int end) throws SystemException {
	return wrap(super.getRoleUsers(roleId, start, end));
}

@Override
public List<User> getRoleUsers(long roleId, int start, int end, OrderByComparator orderByComparator) throws SystemException {
	return wrap(super.getRoleUsers(roleId, start, end, orderByComparator));
}

@Override
public List<User> getSocialUsers(long userId1, long userId2, int start, int end, OrderByComparator obc) throws PortalException, SystemException {
	return wrap(super.getSocialUsers(userId1, userId2, start, end, obc));
}

@Override
public List<User> getSocialUsers(long userId1, long userId2, int type, int start, int end, OrderByComparator obc) throws PortalException, SystemException {
	return wrap(super.getSocialUsers(userId1, userId2, type, start, end, obc));
}

@Override
public List<User> getSocialUsers(long userId, int start, int end, OrderByComparator obc) throws PortalException, SystemException {
	return wrap(super.getSocialUsers(userId, start, end, obc));
}

@Override
public List<User> getSocialUsers(long userId, int type, int start, int end, OrderByComparator obc) throws PortalException, SystemException {
	return wrap(super.getSocialUsers(userId, type, start, end, obc));
}

@Override
public List<User> getTeamUsers(long teamId) throws SystemException {
	return wrap(super.getTeamUsers(teamId));
}

@Override
public List<User> getTeamUsers(long teamId, int start, int end) throws SystemException {
	return wrap(super.getTeamUsers(teamId, start, end));
}

@Override
public List<User> getTeamUsers(long teamId, int start, int end, OrderByComparator orderByComparator) throws SystemException {
	return wrap(super.getTeamUsers(teamId, start, end, orderByComparator));
}

@Override
public User getUser(long userId) throws PortalException, SystemException {
	return wrap(super.getUser(userId));
}

@Override
public User getUserByContactId(long contactId) throws PortalException, SystemException {
	return wrap(super.getUserByContactId(contactId));
}

@Override
public User getUserByEmailAddress(long companyId, String emailAddress) throws PortalException, SystemException {
	return wrap(super.getUserByEmailAddress(companyId, emailAddress));
}

@Override
public User getUserByFacebookId(long companyId, long facebookId) throws PortalException, SystemException {
	return wrap(super.getUserByFacebookId(companyId, facebookId));
}

@Override
public User getUserById(long companyId, long userId) throws PortalException, SystemException {
	return wrap(super.getUserById(companyId, userId));
}

@Override
public User getUserById(long userId) throws PortalException, SystemException {
	return wrap(super.getUserById(userId));
}

@Override
public User getUserByOpenId(long companyId, String openId) throws PortalException, SystemException {
	return wrap(super.getUserByOpenId(companyId, openId));
}

@Override
public User getUserByPortraitId(long portraitId) throws PortalException, SystemException {
	return wrap(super.getUserByPortraitId(portraitId));
}

@Override
public User getUserByScreenName(long companyId, String screenName) throws PortalException, SystemException {
	return wrap(super.getUserByScreenName(companyId, screenName));
}

@Override
public User getUserByUuidAndCompanyId(String uuid, long companyId) throws PortalException, SystemException {
	return wrap(super.getUserByUuidAndCompanyId(uuid, companyId));
}

@Override
public List<User> getUserGroupUsers(long userGroupId) throws SystemException {
	return wrap(super.getUserGroupUsers(userGroupId));
}

@Override
public List<User> getUserGroupUsers(long userGroupId, int start, int end) throws SystemException {
	return wrap(super.getUserGroupUsers(userGroupId, start, end));
}

@Override
public List<User> getUserGroupUsers(long userGroupId, int start, int end, OrderByComparator orderByComparator) throws SystemException {
	return wrap(super.getUserGroupUsers(userGroupId, start, end, orderByComparator));
}

@Override
public List<User> getUsers(int start, int end) throws SystemException {
	return wrap(super.getUsers(start, end));
}

@Override
public List<User> search(long companyId, String firstName, String middleName, String lastName, String screenName, String emailAddress, int status, LinkedHashMap<String, Object> params, boolean andSearch, int start, int end, OrderByComparator obc) throws SystemException {
	return wrap(super.search(companyId, firstName, middleName, lastName, screenName, emailAddress, status, params, andSearch, start, end, obc));
}

@Override
public List<User> search(long companyId, String keywords, int status, LinkedHashMap<String, Object> params, int start, int end, OrderByComparator obc) throws SystemException {
	return wrap(super.search(companyId, keywords, status, params, start, end, obc));
}

@Override
public User updateAgreedToTermsOfUse(long userId, boolean agreedToTermsOfUse) throws PortalException, SystemException {
	return wrap(super.updateAgreedToTermsOfUse(userId, agreedToTermsOfUse));
}

@Override
public User updateEmailAddress(long userId, String password, String emailAddress1, String emailAddress2) throws PortalException, SystemException {
	return wrap(super.updateEmailAddress(userId, password, emailAddress1, emailAddress2));
}

@Override
public User updateEmailAddress(long userId, String password, String emailAddress1, String emailAddress2, ServiceContext serviceContext) throws PortalException, SystemException {
	return wrap(super.updateEmailAddress(userId, password, emailAddress1, emailAddress2, serviceContext));
}

@Override
public User updateEmailAddressVerified(long userId, boolean emailAddressVerified) throws PortalException, SystemException {
	return wrap(super.updateEmailAddressVerified(userId, emailAddressVerified));
}

@Override
public User updateFacebookId(long userId, long facebookId) throws PortalException, SystemException {
	return wrap(super.updateFacebookId(userId, facebookId));
}

@Override
public User updateIncompleteUser(long creatorUserId, long companyId, boolean autoPassword, String password1, String password2, boolean autoScreenName, String screenName, String emailAddress, long facebookId, String openId, Locale locale, String firstName, String middleName, String lastName, int prefixId, int suffixId, boolean male, int birthdayMonth, int birthdayDay, int birthdayYear, String jobTitle, boolean updateUserInformation, boolean sendEmail, ServiceContext serviceContext) throws PortalException, SystemException {
	return wrap(super.updateIncompleteUser(creatorUserId, companyId, autoPassword, password1, password2, autoScreenName, screenName, emailAddress, facebookId, openId, locale, firstName, middleName, lastName, prefixId, suffixId, male, birthdayMonth, birthdayDay, birthdayYear, jobTitle, updateUserInformation, sendEmail, serviceContext));
}

@Override
public User updateJobTitle(long userId, String jobTitle) throws PortalException, SystemException {
	return wrap(super.updateJobTitle(userId, jobTitle));
}

@Override
public User updateLastLogin(long userId, String loginIP) throws PortalException, SystemException {
	return wrap(super.updateLastLogin(userId, loginIP));
}

@Override
public User updateLockout(User user, boolean lockout) throws PortalException, SystemException {
	return wrap(super.updateLockout(user, lockout));
}

@Override
public User updateLockoutByEmailAddress(long companyId, String emailAddress, boolean lockout) throws PortalException, SystemException {
	return wrap(super.updateLockoutByEmailAddress(companyId, emailAddress, lockout));
}

@Override
public User updateLockoutById(long userId, boolean lockout) throws PortalException, SystemException {
	return wrap(super.updateLockoutById(userId, lockout));
}

@Override
public User updateLockoutByScreenName(long companyId, String screenName, boolean lockout) throws PortalException, SystemException {
	return wrap(super.updateLockoutByScreenName(companyId, screenName, lockout));
}

@Override
public User updateModifiedDate(long userId, Date modifiedDate) throws PortalException, SystemException {
	return wrap(super.updateModifiedDate(userId, modifiedDate));
}

@Override
public User updateOpenId(long userId, String openId) throws PortalException, SystemException {
	return wrap(super.updateOpenId(userId, openId));
}

@Override
public User updatePassword(long userId, String password1, String password2, boolean passwordReset) throws PortalException, SystemException {
	return wrap(super.updatePassword(userId, password1, password2, passwordReset));
}

@Override
public User updatePassword(long userId, String password1, String password2, boolean passwordReset, boolean silentUpdate) throws PortalException, SystemException {
	return wrap(super.updatePassword(userId, password1, password2, passwordReset, silentUpdate));
}

@Override
public User updatePasswordManually(long userId, String password, boolean passwordEncrypted, boolean passwordReset, Date passwordModifiedDate) throws PortalException, SystemException {
	return wrap(super.updatePasswordManually(userId, password, passwordEncrypted, passwordReset, passwordModifiedDate));
}

@Override
public User updatePasswordReset(long userId, boolean passwordReset) throws PortalException, SystemException {
	return wrap(super.updatePasswordReset(userId, passwordReset));
}

@Override
public User updatePortrait(long userId, byte[] bytes) throws PortalException, SystemException {
	return wrap(super.updatePortrait(userId, bytes));
}

@Override
public User updateReminderQuery(long userId, String question, String answer) throws PortalException, SystemException {
	return wrap(super.updateReminderQuery(userId, question, answer));
}

@Override
public User updateScreenName(long userId, String screenName) throws PortalException, SystemException {
	return wrap(super.updateScreenName(userId, screenName));
}

@Override
public User updateStatus(long userId, int status, ServiceContext serviceContext) throws PortalException, SystemException {
	return wrap(super.updateStatus(userId, status, serviceContext));
}

@Override
public User updateUser(User user) throws SystemException {
	return wrap(super.updateUser(user));
}

@Override
public User updateUser(long userId, String oldPassword, String newPassword1, String newPassword2, boolean passwordReset, String reminderQueryQuestion, String reminderQueryAnswer, String screenName, String emailAddress, long facebookId, String openId, String languageId, String timeZoneId, String greeting, String comments, String firstName, String middleName, String lastName, int prefixId, int suffixId, boolean male, int birthdayMonth, int birthdayDay, int birthdayYear, String smsSn, String aimSn, String facebookSn, String icqSn, String jabberSn, String msnSn, String mySpaceSn, String skypeSn, String twitterSn, String ymSn, String jobTitle, long[] groupIds, long[] organizationIds, long[] roleIds, List<UserGroupRole> userGroupRoles, long[] userGroupIds, ServiceContext serviceContext) throws PortalException, SystemException {
	return wrap(super.updateUser(userId, oldPassword, newPassword1, newPassword2, passwordReset, reminderQueryQuestion, reminderQueryAnswer, screenName, emailAddress, facebookId, openId, languageId, timeZoneId, greeting, comments, firstName, middleName, lastName, prefixId, suffixId, male, birthdayMonth, birthdayDay, birthdayYear, smsSn, aimSn, facebookSn, icqSn, jabberSn, msnSn, mySpaceSn, skypeSn, twitterSn, ymSn, jobTitle, groupIds, organizationIds, roleIds, userGroupRoles, userGroupIds, serviceContext));
}

Okay, although this seems like a lot of methods, at least we're done now, right?

Eh, not so fast...

There are still some holes in our implementation.  For example, any DynamicQuery that is returning users would not be wrapped.  Any custom queries elsewhere that are returning users would not be wrapped.  Any other service that is returning a User or Users that don't go through the UserLocalService, well those would not be wrapped either.

Conclusion

But actually I think we're in pretty good shape.  We're hitting most of the major points where User objects are being returned, all of those users will be appropriately wrapped to return the gravatar URLs for the portrait pics...

A final suggestion I would leave with you - if you're using gravatar.com you may not want to support changing profile pics anymore.  All you should need to do is add to your portal-ext.properties file a line like:

field.editable.domains[portrait]=

Any user that has an email address that matches the domain specified in this line will be able to update the portrait.  When it's empty like above, users should not be able to edit the portrait.

Should, but it's not definite, because there are other field.editable properties which have higher priority than the specific field check; for example, field.editable.roles=administrator allows admins to edit all fields, including the portrait image.

But that is just a matter of training your admins not to edit the portraits and you should be fine.

Anyway, you'll want to build this as a service hook plugin and deploy it into your environment, and your users should start seeing their gravatar profile pics.
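
As a sketch of that hook registration for Liferay 6.2 (the wrapper class name below is hypothetical; use whatever you named your UserLocalServiceWrapper extension), the liferay-hook.xml would contain something like:

<?xml version="1.0"?>
<!DOCTYPE hook PUBLIC "-//Liferay//DTD Hook 6.2.0//EN" "http://www.liferay.com/dtd/liferay-hook_6_2_0.dtd">

<hook>
  <service>
    <service-type>com.liferay.portal.service.UserLocalService</service-type>
    <service-impl>com.example.gravatar.GravatarUserLocalServiceWrapper</service-impl>
  </service>
</hook>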

Vaadin 7 Control Panel is Compatible with Vaadin 7.5.0

General Blogs July 6, 2015 By David H Nebinger

Just a quick note...

Vaadin 7.5.0 was released at the end of June, 2015.

I've completed an initial round of testing, and Vaadin 7 Control Panel version 1.0.3.0 is totally compatible with 7.5.0.  All themes compiled and so did the widgetsets.

If you run into any problems with 7.5.0 compatibility, please let me know...

Content Creation is Not a Development Activity!

General Blogs June 26, 2015 By David H Nebinger

Let me say that again:

Content Creation is Not a Development Activity!

Pretty strong statement, but for those considering Liferay as a platform it's really an important concept, one that lives in the core of Liferay.

Note: This is really a different kind of blog post for me.  I prefer to keep things technical and discuss practical solutions to common problems.  But I've actually seen this problem come up in many projects and I'm sure many of you have seen it too.  So forgive me while I climb up on my soap box and preach for a while...

The Author's Dilemma

Since the first cave drawings, authors have been able to express themselves using basic tools to put thought to paper (or some other transferable medium).  The only barrier to creating content was the availability of those basic instruments.

In the modern era, publishing became a structured process that applied to most physical print media.  Whether a book, a magazine or a newspaper, the following roles are involved:

  • Writer - Responsible for authoring the content.
  • Editor - Approves content and edits for length, content, format, syntax and spelling.
  • Illustrator - Provides supporting images if it's illustrated.
  • Production Designer - Handles page layout and general design.
  • Printer - Generates the final physical media.

With the invention of the world wide web, publishing on the web still needed to follow this standard process (with the web browser in lieu of the printer).  After all, it is still publishing even though it's on a digital platform.

The dilemma in these roles for web publishing is that the simple, basic tools of the past were replaced with complex computer languages that required software developers (and often development teams).  Even the initial HTML-only pages required computer knowledge, tools and skills, things that the average content author doesn't have.

Unfortunately, adding software developers into the content authoring process brought with it the software development process.

Publishing and the Software Development Process

After a lot of pain and suffering, software development lifecycles and processes were defined and implemented to reduce or eliminate bugs.  An experienced developer knows and appreciates the separate environments for development, testing, UAT and production.  A development team's goal is to promote the software artifacts through each successive environment until they eventually make it to production.

The publishing process actually overlays onto this development process without too much contention, especially given the original nature of web publishing.

Remember way back in 1995 when we first started creating web pages?  I do, so I guess that dates me a little.  At that point in time we were creating complete HTML pages with content wrapped in <b>, <i>, <u> and (god forbid) <blink> tags.  We'd connect all of the pages with simple <a> tags and had some <img> tags too.  Pages were extremely simple but, more importantly, an HTML page contained everything.

We quickly moved away from simple HTML and started doing some JSP pages, some PHP pages, some perl, some ruby, some <insert favorite web language here>...  We added CSS to style the content and JavaScript to liven it up.

Even with this shift, however, the files contained everything.  Even with web applications, REST, javascript SPAs, etc. we still have the same basic concept that the files contained everything.

So with web publishing, the artifacts created by the developer could be promoted through the environments just as in non-web projects.  The classic publishing roles would overlay onto the environments, perhaps in a different order, but in each of the environments the different roles could effect change on the content and, on final approval, publish it to the production environment.

It's been like that for 20 years now, so long that it's practically ingrained in our enterprises that for web content creation/publishing we need to get all those environments lined up and ready for content promotion.

Enter The Liferay

So we come to Liferay with this expectation on how web content publishing works.  I mean, it's been this way for 20 years, so this is the only way it can and should work.  Right?

Well, wrong.  Liferay actually allows the restoration of the classic publishing roles.  Through existing facilities already baked into the Liferay core, a writer can author content.  Through the use of workflow, editors can approve and edit content.  Illustrators can upload images.  Using local or remote staging, production designers can deal with page layouts and design.  The printer doesn't generate physical media, but they can be responsible for publishing the staging content to the live site.

Great!  We've restored the classic publishing process, so everything is puppies and flowers!

Well, not really.  Liferay really sticks it to the old time web content developers and project managers because it practically eliminates the ability to promote content between environments.  Liferay really doesn't allow you to create content in development and promote it to test, UAT and eventually production.

This seems to blow everyone's minds.  How is a PM supposed to say "yes, the developer did the job correctly and the content is ready for promotion"?  How does a developer provide a software artifact to support marking the end of a phase of the SDLC?  We can't possibly do a real web project on Liferay without support for web content promotion...

The Trouble with LARs

It's usually this point where someone will say "Hey, I can export pages/content as a LAR file from one environment and then import it into another!  I can force Liferay to support web content promotion..."

That's really a problem.  Often it will work in a test case or for an initial short run, but LARs are really the round peg for a "web content promotion" square hole.  In order to get LARs to work in this process, you really end up shoving and pushing and hammering to try to get it to work.

Here's some finer points about LARs that often need to be mentioned when discussing using them in this fashion:

  • They are extremely fragile.  LAR imports can break if you even look at them funny.
  • Failures are reported like idiot lights on a car dashboard.  The errors logged during failures are quite obtuse.  They don't tell you what part of the LAR failed, they don't report what conflicts actually are, and they certainly don't suggest how you fix the problem.
  • LAR imports terminate on first exception.  If your LAR actually has five potential import exceptions, you're not going to know it until you complete a few rounds of export/import/analyze to work through all the issues individually.
  • LARs are tightly coupled to the version of Liferay.  LARs are stamped with the version of Liferay the data was exported from and will not import into a different Liferay version.  And this is not a major version restriction (i.e. from 6.1 to 6.2), this is an exact version match on specific GA or SP release.
  • LARs can be tightly coupled to installed plugins.  Depending upon options selected during LAR export, the LAR file can actually contain data related to deployed plugins; if the plugins are not available on the target system, the import will fail.  For example, exporting a page that has a calendar instance, the LAR will contain info about this; if the target Liferay does not have the calendar plugin installed, the LAR import will fail.  One might expect the page to load but exclude the calendar, but it is treated as an entire failure.
  • LARs do not maintain asset IDs during import.  LARs actually do their work on asset names.  This allows the LAR import to work by reassigning a valid surrogate ID in the system where the assets are imported.  However, if any of your code refers to an asset by ID that code will likely fail in other environments as surrogate ID values will not be guaranteed to match.  So if you use a simple URL to embed an image in web content using the ID, it will need to be adjusted in the target system to get it to work.  These kinds of things can be tricky because there is no failure at LAR import time, it really will only be noticed by someone viewing the imported LAR content in the site.
  • It is super easy to export a LAR that cannot be imported.  This is really frustrating and happens a lot with structures.  When you're exporting a journal article that uses a structure it is normal to include the structure in the LAR.  But after you do this once, the structure would have already been loaded into the target environment (maybe).  So you should stop including the structure in the LAR as this allows the import to work until, of course, you try to import into an environment that doesn't have the structure loaded...
  • LARs don't preserve created/modified users and times.  This is valuable meta information, but if I give you a LAR containing web content that I created, it won't keep that information in your system unless I'm also a user in your environment.
  • Big LAR files fail to load for mysterious reasons.  Often they are due to some of the previously raised points, but other times they just outright fail and don't give you a reason.

And seriously, the list of issues goes on, but I think you get the point.

Long story short, the idea that LARs can be used to support a long-term process of promoting web content between environments is really doomed to failure.  Those of you who are using LARs to support web content promotion have probably already seen some of these issues; if you haven't yet, I'd say you've been lucky so far, but I predict that luck will run out.

LARs were designed for a specific use case, but web content promotion between environments was not one of them.

Restoring Sanity

No, LARs are not broken, so don't go opening bugs on Liferay.com to get web content promotion working.

What's broken is the view that web content creation must be treated as a development activity.

It's not, and everyone really should be happy about that.  Developers are not web content creators, project managers are not editors or designers, and web content creation is not a project.

So restore sanity in your Liferay installation.  Get the PMs and developers out of the web content creation business.

Set up a workflow process to allow knowledgeable folks to review and approve content.  Set up staging (either local or remote) to give folks time to lay things out and have them reviewed and approved.  Manage all of the normal activities that a publishing process requires.

But do all of these things in production.  Don't try to push down to the lower levels of a SDLC setup, and don't try to force web content creation through the promotion process as though it's a software development artifact.

Liferay spent a lot of time and effort bringing these web content creation and publication tools into Liferay so the content creation process could be freed from the grips of the IT department.

Take advantage of this freedom.  It is yours...

Will the Real Font Awesome Please Step Forward...

General Blogs May 29, 2015 By David H Nebinger

Introduction

In my last blog post I had a little rant about the old version of Font Awesome included with Liferay 6.2 and how it would always be out of date because they seem to keep adding new glyphs every day.

So I wondered if it was possible to use the current Font Awesome in a theme and started down a path to test it out.

Font Awesome

For those that don't know, Font Awesome is basically a custom font made from SVG images.  It's awesome in that SVG images scale to practically any size yet still keep nice curves and won't pixelate on you.

By using Font Awesome, you get a catalog of canned images for all kinds of standard buttons that you see on a desktop application or on a web page plus a long list of others.

In Liferay 6.2 they added Font Awesome 3.2.1 into the mix.  At this point in time it is two years old and, although it still works, the set of available glyphs (361 of them) is smaller than the current 4.3.0's 519 glyphs.

Now I can't really comment on how useful the 519 glyphs are.  I really doubt I'll ever have a need for the new viacoin glyph, for example.  But I do want to know that if I need a glyph, I can use it.

Creating the Theme

Start by creating a new theme project for Liferay 6.2.  To keep the theme simple, use classic as the parent for the theme.

Download and expand the latest version of Font Awesome; I'm using 4.3.0.

From your Font Awesome directory (mine is font-awesome-4.3.0), copy the css and fonts folders to your theme.  If using Maven, you'll copy them to the src/main/webapp directory of your theme.  If using the SDK, you'll copy them to the docroot/_diffs folder.

To keep the theme changes as simple as possible, copy the css/main.css from the _styled theme into your new theme's css folder.  Edit this file and add the font-awesome.css import at the end:

@import url(font-awesome.css);

Once you've tested your theme out, you might want to switch over to font-awesome.min.css, but Liferay will already be minimizing the css on the fly so this is not really necessary.

At this point your theme project is done.  Build and deploy it to prepare for the next step.

Testing Font Awesome

So testing is actually pretty easy.  Create a new page in your portal somewhere (doesn't matter where) and use your new theme for the page and, to keep things really simple, use a 1 column layout.

Drop a Web Content Display portlet on the page and create a new web content article.  For the body of the article, click the "Source" button and set it to the following:

<div><i class="fa fa-3x fa-coffee"></i></div>

Publish the article and you'll see the FA coffee cup glyph.

Drop another Web Content Display portlet on the page and create a new article.  For the body of the article, click on the "Source" button, then grab the HTML fragment from the attached file (there are over 500 glyphs so I didn't want to put all of that content here in the blog) and paste it in.

And before you say anything, yes I know it's a table and we don't do those anymore because they are not responsive.  I went with a table because this is just to show that Font Awesome is working in the theme, it's not trying to show the best way to put a table into your journal articles.

When you publish the article and return to your portal, voila, you can see all of the available Font Awesome 4.3.0 glyphs!

Conclusion

So it is possible to use the latest Font Awesome in your themes, leveraging all of the latest glyphs.

What's more, FA offers some similar glyph fonts from FontIcons.com.  It just so happens that those, too, can be integrated into a theme in the same way we just added Font Awesome.  Note that I'm not really selling FontIcons or anything, I'm just providing choice and affirmation that, should you need/want FontIcons, they'll work in your theme too.

So at one point I also wondered if it would be possible to upgrade Alloy's older Font Awesome 3.2.1 to the latest 4.3.0.  The simple answer is no.  For the Font Awesome 4 release, the classes changed from "icon-coffee" to a better namespaced "fa-coffee".  Some of the supported stylings also changed.  So Alloy's use of Font Awesome 3 cannot easily be adapted to use Font Awesome 4.

That said, it cannot be ruled out entirely.  I'm sure if you had enough time and energy, you could rework all of the Font Awesome 4 class names to match the Font Awesome 3 names and then replace Alloy's older FA files with the hacked versions, but what an ugly hack that is.  A better path, though one I haven't tried, would be to override how Alloy pulls in and uses the FA 3 files and have it pull in and use the FA 4 files instead; I think all it would take is tweaking the Font Awesome 3 imports pulled in via aui.css for themes that have _styled as a parent (or grandparent, great-grandparent, etc.) and having them pull in the FA 4 files instead.
