Hot Deploy vs Auto Deploy

Company Blogs March 6, 2014 By Michael C. Han Staff

There has been a tremendous lack of understanding around what Liferay means by "hot deploy" and "auto deploy". Most people get the two concepts confused, believing them to be one and the same. In reality, Liferay has TWO completely separate and different concepts: hot and auto deploy (i.e. Hot Deploy != Auto Deploy).


Hot Deploy

Many of you are familiar with hot deployment in the context of JEE application servers. It basically means that if an application artifact (WAR or EAR file) is present in a specifically configured directory, the application server (e.g. Tomcat, WebLogic, etc.) will pick up that artifact, deploy it within the application server, and start the application. If you are not familiar with this concept, please consult your application server’s documentation.
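At its core the mechanism is simple. As a rough sketch (not any vendor's actual implementation), a deployment scanner just polls the configured directory and reports archives it has not seen before:

```java
import java.io.File;
import java.util.HashSet;
import java.util.Set;

// Minimal sketch of a hot-deploy scanner: poll a directory and report
// WAR files that have not been seen before. Real application servers
// also track modification timestamps (for redeploys) and handle
// undeploys when an archive is removed.
public class DeployScanner {

    private final File deployDir;
    private final Set<String> deployed = new HashSet<String>();

    public DeployScanner(File deployDir) {
        this.deployDir = deployDir;
    }

    // Returns the names of newly discovered WAR files on this pass.
    public Set<String> scan() {
        Set<String> discovered = new HashSet<String>();
        File[] files = deployDir.listFiles();
        if (files == null) {
            return discovered;
        }
        for (File file : files) {
            String name = file.getName();
            if (name.endsWith(".war") && deployed.add(name)) {
                discovered.add(name);
            }
        }
        return discovered;
    }
}
```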

This model works really well for development purposes; you don't have to wait five minutes for a server restart (WebLogic/WebSphere) to test changes to your code. This model also works for single-node production deployments. I’m personally not a fan of it, since many applications do not handle starts/stops and classloader destruction properly, but that’s a different topic.

This model completely breaks down when you deploy to a multi-node production environment. In a multi-node environment, you have many more constraints to deal with:

  • Ensuring the application archive is available to all nodes

  • Ensuring the application deploys successfully across all nodes (ideally simultaneously)

Most application servers solve this with a master/slave type of design: an admin server with multiple managed servers. When you “hot deploy”, you use the admin server’s user interface (or a vendor command-line tool such as WebSphere’s wsadmin or WebLogic’s WLST) to add the archive, select which managed servers should deploy it, and start the application. Application server vendors have different names and tools for this:

  1. JBoss “domain” mode

  2. WebLogic “production” mode

  3. WebSphere deployment manager

  4. Tomcat FarmWarDeployer

This is something that resides completely outside of Liferay Portal and is strictly in the application server’s realm.


Liferay piggybacks on this capability and performs additional initialization after a given application has been started (e.g. via the javax.servlet.ServletContextListener mechanism). This is really more “application lifecycle” and inter-application dependency management than “hot deploy”. When Liferay moves fully to OSGi in 7.0, we will more cleanly separate the “hot deploy” and “application lifecycle” concepts.


There are some specific Liferay capabilities that won't work unless your application server has hot deploy capabilities: specifically, the "hot deployment" of custom JSPs in hooks. Liferay's JSP hook overrides very much depend upon the application server's ability to:

  1. Deploy from an exploded WAR (specifically the portal's WAR)
  2. Load changes to JSP files at runtime.

Application servers running in "production", "domain", and similar modes cannot support this: in those deployment models the servers generally do not use exploded WARs, and without exploded WARs there is no JSP reload/recompile either. Even for Tomcat, it's generally advisable to deactivate JSP reloading for production deployments.


So what do you do if you use hooks to override Liferay JSPs AND you must use non-exploded WAR deployments? Simple: inject a pre-processing stage into your build process. Deploy the hooks and allow them to make their changes to the portal WAR's contents, then rebundle the portal's WAR file and deploy it using the application server's deployment tools. Of course, you'll still need to deploy your hook as well, but you no longer need to worry about the JSP overrides not being loaded by your application server.
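That rebundling step can be sketched in plain Java: copy the portal WAR entry by entry, substituting any JSP the hook overrides. This is only an illustration of the idea, not the Plugins SDK's actual code, and the entry names are hypothetical.

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Enumeration;
import java.util.Map;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import java.util.zip.ZipOutputStream;

// Sketch of build-time JSP injection: rewrite a WAR, replacing any
// entry for which an override exists. Illustrative only, not the
// Plugins SDK's implementation.
public class WarOverlay {

    public static void overlay(
            File sourceWar, Map<String, byte[]> overrides, File targetWar)
        throws IOException {

        try (ZipFile zip = new ZipFile(sourceWar);
             ZipOutputStream out =
                 new ZipOutputStream(new FileOutputStream(targetWar))) {

            Enumeration<? extends ZipEntry> entries = zip.entries();

            while (entries.hasMoreElements()) {
                ZipEntry entry = entries.nextElement();

                out.putNextEntry(new ZipEntry(entry.getName()));

                byte[] override = overrides.get(entry.getName());

                if (override != null) {
                    out.write(override); // the hook's customized JSP wins
                }
                else {
                    try (InputStream in = zip.getInputStream(entry)) {
                        in.transferTo(out); // copy the original entry
                    }
                }

                out.closeEntry();
            }
        }
    }
}
```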


Auto Deploy

The Liferay Auto Deploy feature is a mostly optional feature that works in conjunction with the hot deployment capabilities of the application server.


So what does the auto deploy feature actually do? It:

  1. Picks up a Liferay-recognized archive (e.g. *-portlet.*, *-theme.*, *-web.*, *.lpkg, etc.)

  2. Injects required libraries (e.g. util-java.jar, util-taglib.jar, etc.)

  3. Injects dependent jars (specified in

  4. Injects required taglib descriptors (e.g. liferay-theme.tld)

  5. Injects required deployment descriptors (e.g. app server specific descriptors, etc.)

  6. Injects Liferay-specific deployment descriptors if missing (e.g. liferay-portlet.xml, etc.)
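Step 1, recognizing a plugin archive by its name, might look something like this sketch (the suffix conventions come from the list above; the real deployer does considerably more, including inspecting the archive's contents):

```java
// Sketch of how the auto deployer might classify archives by file
// name, using the suffix conventions listed above. Illustrative only;
// the actual deployer also examines what is inside the archive.
public class PluginTypeDetector {

    public static String detect(String fileName) {
        if (fileName.endsWith(".lpkg")) {
            return "lpkg";
        }
        if (fileName.contains("-portlet.")) {
            return "portlet";
        }
        if (fileName.contains("-theme.")) {
            return "theme";
        }
        if (fileName.contains("-web.")) {
            return "web";
        }
        return "unknown";
    }
}
```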


It’s a huge timesaver during development, and it means a portlet developer doesn’t have to learn all of Liferay’s deployment descriptors. However, this feature is completely incompatible with how many application servers work in their farm or multi-node modes.



So how do you get Liferay to work if you intend to configure your application server in those modes? Very simple: don’t use the auto deployer at runtime; use it at build time.


Liferay’s Plugins SDK allows you to preprocess your archives and inject all the required elements, thus bypassing the auto deployer at runtime. You simply need to call the Ant task direct-deploy (e.g. ant direct-deploy).


The direct-deploy task will create an exploded WAR, from which you can easily create a WAR file. The location of the exploded WAR depends upon what you have configured for the application server’s deploy directory in your Plugins SDK’s


If you choose to not use the Liferay Plugins SDK, then you can examine the build-common.xml in the SDK to see how Liferay invokes the deployer tools.


Hopefully, this will clear up some of the confusion out there about “hot” and “auto” deploy.


Liferay Spring Contexts

Company Blogs September 13, 2011 By Michael C. Han Staff

It's a somewhat rare occasion that I get to fuss around with code these days.  This time, it happens to be in support of one of our Global Services consultants for his West Coast Symposium workshop.

As many of you know, Liferay's Service Builder allows you to completely avoid worrying about the persistence layer, hand-coding your own query and entity caching, etc.  It really cuts down on the amount of tedious development associated with JEE applications.  However, using Spring MVC together with Service Builder has been problematic, as combining the two leads to two root Spring ApplicationContexts:

  • one for the ServiceBuilder generated services
  • a second for the Spring MVC beans.  

Of course, Spring doesn't like it when you attempt to have multiple root application contexts within the same servlet context.  To use both ServiceBuilder services and Spring MVC, developers would take the strategy of packaging the services in one WAR and the Spring MVC portlets in a second WAR.

I won't bore you with the ins and outs of ClassLoader hierarchies, aggregate classloaders, why we have multiple Spring contexts, etc.  Suffice it to say, in 6.1 you will be able to package Spring MVC with ServiceBuilder services within the same plugin WAR.

You will need to do the following:

  1. Add the Spring ContextLoaderListener to your web.xml to initialize the Spring MVC related config files (e.g. WEB-INF/applicationContext.xml).
  2. Place all custom Spring configs required by ServiceBuilder services in something like:


This will ensure that service-related beans are loaded in the ServiceBuilder ApplicationContext and Spring MVC related beans are loaded in the Spring MVC ApplicationContext.
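For step 1, the listener registration in web.xml is the standard Spring one (the config file location here is illustrative):

```xml
<!-- Standard Spring bootstrap: ContextLoaderListener builds the root
     context from the locations listed in contextConfigLocation. -->
<context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>/WEB-INF/applicationContext.xml</param-value>
</context-param>
<listener>
    <listener-class>
        org.springframework.web.context.ContextLoaderListener
    </listener-class>
</listener>
```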

There are some constraints/limitations:

  1. You cannot inject any dependencies from the Spring MVC context into beans held in the ServiceBuilder context.  However, this should never happen anyway, since the two architectural layers should be kept separate.
  2. You cannot inject services (e.g. MyLocalService) into the Spring MVC beans.  Instead, in your Spring MVC beans, you must call MyLocalServiceUtil.doXYZ(...).

(2) may cause some consternation for IoC practitioners, since it breaks the rule that all dependencies should be injected.  However, you can easily overcome this inconvenience by injecting a mock object into the MyLocalServiceUtil as part of your unit tests.
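Here's a sketch of that testing trick using hypothetical names (MyLocalService and MyLocalServiceUtil below are hand-written stand-ins, not actual ServiceBuilder output): because the Util facade delegates to a swappable static reference, a unit test can install a mock before exercising the Spring MVC bean.

```java
// Hypothetical stand-ins for a ServiceBuilder-style service and its
// static Util facade. The swappable static reference is what makes
// mock injection possible in unit tests.
interface MyLocalService {
    String doXYZ(long id);
}

class MyLocalServiceUtil {

    private static MyLocalService _service;

    public static String doXYZ(long id) {
        return _service.doXYZ(id);
    }

    // In the portal this is wired by Spring; a unit test calls it
    // directly to install a mock.
    public static void setService(MyLocalService service) {
        _service = service;
    }
}

// A Spring MVC bean that calls the Util rather than an injected
// service, per constraint (2) above.
class MyController {

    public String handle(long id) {
        return MyLocalServiceUtil.doXYZ(id);
    }
}
```

In a test, `MyLocalServiceUtil.setService(id -> "mock-" + id)` installs the mock, and the controller is then exercised without any real service wiring.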

How debuggers saved me grey hairs...

Company Blogs October 29, 2010 By Michael C. Han Staff

As much as Brian wants to stick with System.outs for debugging, I would go with a proper debugger any day.  Each time I use the IntelliJ debugger, I thank the engineers behind IDEA for putting a fantastically useful debugger into the IDE.


A few days ago, I hit a very weird problem where I had to quickly find out why a particular data element was causing a Liferay LAR file to fail to import.  The LAR had roughly 200 pages, 500 web content articles, and 200 documents.  Imagine trying to step through that many elements one at a time...

Conditional breakpoints to the rescue!

In IntelliJ, you can configure when a breakpoint gets triggered.  For instance, I was debugging the LayoutImporter in trunk at line 778:


     Node parentLayoutNode = rootElement.selectSingleNode("./layouts/layout[@layout-id='" + parentLayoutId + "']");


With 200 different layouts, I would be clicking "proceed" a lot to find the layout I really wanted.  Instead, I defined the following condition for activating the breakpoint:

  (parentLayoutId == 203990)



By using this conditional breakpoint, the IDE would only halt if the parentLayoutId equals 203990, and not for the other 199 layouts.

The condition expression can contain any number of boolean expressions, making this an extremely powerful capability.
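To see why this beats System.out debugging, here's roughly what you would otherwise hand-write and redeploy (a throwaway sketch, not the LayoutImporter's code):

```java
import java.util.List;

// What a conditional breakpoint gives you for free: out of hundreds of
// iterations, react only when the interesting value appears. With
// plain System.out debugging you would hand-write a guard like this
// and redeploy every time the condition changed.
public class ConditionalGuard {

    // Returns the iteration a conditional breakpoint would halt on,
    // or -1 if the value never appears.
    public static int findMatch(List<Long> parentLayoutIds, long target) {
        for (int i = 0; i < parentLayoutIds.size(); i++) {
            if (parentLayoutIds.get(i) == target) {
                return i;
            }
        }
        return -1;
    }
}
```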


I'm sure you can do the same in Eclipse but I just don't know how.

LDAP Enhancements

Company Blogs March 25, 2010 By Michael C. Han Staff

With the upcoming 6.0 release, you will see many product engineers announce cool new features.  Unfortunately, LDAP integration is neither cool nor new.  However, in 6.0, we have improved the capabilities of our LDAP integration in several areas:

1. You can synchronize user custom attributes between Liferay and LDAP

No longer are you limited to the columns in the User_ table; now you can synchronize attributes like your favorite color between LDAP and Liferay.  You can do this by simply creating the appropriate custom attributes for a User in Liferay's control panel and then configuring the properties "ldap.user.custom.mappings" and/or "" in your


2. In 5.1 and 5.2 EE, we implemented LDAP pagination via PagedResultsControl.  We now make this solution available to the community in 6.0.

3. You can configure the portal to create a role for each LDAP group.

Prior to 6.0, the portal synchronized LDAP groups as user groups, and you had to manually associate each user group with roles.  In 6.0, the portal will create the user group, then create a role with the same name as the user group, and then associate the role with the user group.  This capability is deactivated by default.  However, you can activate it by changing "" to true in

4. You may override LDAP import and export processes via Spring


For those of you who are IoC fans, you were probably frustrated by the inability to customize the import and export processes (they were static methods in PortalLDAPUtil or buried in LDAPUser).  In 6.0, we changed LDAP to provide proper interfaces at all levels of the LDAP integration process:

  • Don't like how Liferay converts LDAP attributes to a Liferay user?  You may implement your own LDAPToPortalConverter in the EXT environment and change a Spring configuration to inject your own implementation.
  • Don't like how Liferay converts a Liferay user to LDAP attributes?  You may implement your own PortalToLDAPConverter.
  • Need to change the export process?  Implement a PortalLDAPExporter.
  • Need to change the import process?  Implement a PortalLDAPImporter.
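To make the idea concrete, here's a minimal sketch. The real LDAPToPortalConverter interface has a richer signature than this hand-rolled stand-in; only the shape of the customization is meant to carry over.

```java
import java.util.Map;

// Hand-rolled stand-in for the extension point described above; the
// actual Liferay interface has a richer signature.
interface LDAPToPortalConverter {
    PortalUser toPortalUser(Map<String, String> ldapAttributes);
}

// Simplified stand-in for a portal user.
class PortalUser {
    String screenName;
    String emailAddress;
}

// A custom converter that, say, derives the screen name from the LDAP
// "uid" attribute instead of the default mapping. Spring would be
// configured to inject this implementation.
class UidBasedConverter implements LDAPToPortalConverter {

    public PortalUser toPortalUser(Map<String, String> ldapAttributes) {
        PortalUser user = new PortalUser();

        user.screenName = ldapAttributes.get("uid");
        user.emailAddress = ldapAttributes.get("mail");

        return user;
    }
}
```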


Simplifying Community Provisioning

Company Blogs May 7, 2009 By Michael C. Han Staff

A common story we hear from the community and our customers is that Liferay has a lot of powerful features; however, it is difficult for those who just want to create a community and start collaborating.  They have to first create layouts, then add portlets, and then configure the portlets.


This is one area where some of our competitors have done quite well: they make it quite simple for someone to quickly create a collaborative community/site and start using it.  Although the new Social Office addresses many of these shortcomings by making it extremely simple to create collaborative sites, we want to bring this simplicity to Liferay Portal as well.


We have decided to tackle this in two phases.  The first phase is an approach that we have taken for several customers and is backwards compatible with 5.1 EE.  This solution leverages existing features like model listeners and Liferay's Live Staging capabilities to provide community templates.


Follow these steps to create your own community template:

1. Create a PRIVATE community with the name DEFAULT_TEMPLATE

2. Activate staging for this community:

   a. Click on "Manage Pages" for the DEFAULT_TEMPLATE community

   b. Click on the “Settings” tab

   c. Check “Activate Staging”


3. With the staging environment activated, add public and/or private pages, and then configure the pages with portlets as necessary

   a. You can add portlets to those pages simply by clicking “View Pages”

   b. Note that the red background indicates you are in the staging environment




Now you have created your first community template.  The next community you create will have the same layouts and portlets as DEFAULT_TEMPLATE.  Changing the layouts in the template community will only impact new communities; existing communities will not receive the changes.


If you wish to have different templates for open, restricted, and private communities, you can easily do so as well.  You would need to repeat the steps above to create the OPEN_TEMPLATE, PRIVATE_TEMPLATE, and RESTRICTED_TEMPLATE communities.  The rules are as follows:

1. If the community is open, template pages from OPEN_TEMPLATE will be used

2. If the community is restricted, template pages from RESTRICTED_TEMPLATE will be used

3. If the community is private, template pages from PRIVATE_TEMPLATE will be used

4. If the template for the associated community type is not found, DEFAULT_TEMPLATE will be used

5. If there are no templates, then nothing is done
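The fallback rules above amount to a simple lookup, sketched here with a set standing in for the query that finds template communities:

```java
import java.util.Set;

// Sketch of the template selection rules above. The Set stands in for
// a lookup of existing template communities; names follow the post.
public class TemplateSelector {

    // Returns the template community name to copy pages from, or null
    // if no templates exist (in which case nothing is done).
    public static String select(
        String communityType, Set<String> existingTemplates) {

        String candidate = communityType.toUpperCase() + "_TEMPLATE";

        if (existingTemplates.contains(candidate)) {
            return candidate;
        }

        if (existingTemplates.contains("DEFAULT_TEMPLATE")) {
            return "DEFAULT_TEMPLATE";
        }

        return null;
    }
}
```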


In the second phase, we will introduce a new template builder feature in 5.3 (Q3 2009).  This feature requires changes that prevent us from making it available in 5.1 and 5.2.




Portal Performance Tuning Part 2

Company Blogs May 6, 2009 By Michael C. Han Staff

After many months of testing, tuning, and more testing, we have finally completed the tuning and benchmarking of the portal.  We finished the tuning for 5.1 Enterprise over a month ago and are in the final stages of tuning 5.2 Enterprise.

Based on feedback from some of our enterprise subscription customers, we pushed the portal with pretty large datasets: 1 million users, 10,000 communities, and over 1 million forum posts.  This helped us identify some index and query issues.

After tuning, we saw pretty dramatic improvements.  For instance, prior to tuning, the portal’s login throughput was roughly 30 logins/second on a single server (75% CPU utilization).  After tuning, the login throughput was 76 logins/second, with mean transaction times of 160 ms and 80% CPU utilization on a single physical server.

We saw similar gains across the board in Liferay’s collaboration tools (Message Boards, Wiki, and Blogs).

The performance enhancements to the portal will be available to the community in our upcoming 5.3 release (Q3 2009).  These enhancements have already been committed to trunk, so you can get access to them even before the official GA.

These performance enhancements are available to our 5.1 EE customers in the next 5.1 EE release (within a couple of weeks).  For those currently using 5.1 SE, you may contact our sales folks for more information on obtaining an EE subscription.


Portal Performance Benchmarking

Company Blogs September 18, 2008 By Michael C. Han Staff

Late last year, we started the process of upgrading our performance testing environments.  Our previous performance environments were basically desktops with limited CPU and memory.  The latest performance environment resembles what we see many of our clients use.  Our configuration is as follows:

1) Web server: one quad-core 2.4 GHz Intel Xeon CPU, 2 GB memory

2) Application server: two quad-core 2.4 GHz Intel Xeon CPUs, 8 GB memory

3) Database server: two quad-core 2.4 GHz Intel Xeon CPUs, 4 GB memory, and a 250 GB 15k RPM SCSI hard drive

We utilize sanitized data from our production systems to generate our test scenarios.  This dedicated environment allows us to provide realistic benchmarks for Liferay Portal and to find bottlenecks and improve performance.

As part of our ongoing performance tuning, we are working with Sun to create SLAMD test scenarios for the out-of-the-box features of the portal, including CMS, blogs, wikis, forums, etc.  We already use Hudson to perform automated builds.  Once the SLAMD scenarios are in place, we will integrate these tests into the Liferay Portal build process to detect performance variances.  Alejandro Medrano from Sun has been working with us to get the SLAMD scenarios in place as quickly as possible.

At the end of the initial performance study, we will be able to provide benchmarks and guidance for a series of reference deployment architectures, from small single-server deployments to large HA deployments.

Observations on Chinese business

Company Blogs June 30, 2008 By Michael C. Han Staff

Well, I'm back in our offices in Asia to help our sales folks and to sign some partnerships with local system integrators.  This has been a rather interesting trip, especially learning some of the different ways Chinese businesses operate and some of their rather unique requirements.

Some of my observations:

1) During the sales cycle, Chinese customers tend to ask for far more upfront in the POC than we normally see in the US and Europe.  One of our potential SI partners is working on a deal with a Chinese mobile carrier; they have spent one man-year developing the POC, all unpaid.

2) Chinese executives at state-owned enterprises expect a longer shelf life for software systems.  The heads of state-owned enterprises change every 8 years or so.  Thus, their expectation is that if they put a piece of software in service during their term, it will last for their term and several years beyond.  I think most European and US executives would agree (be it grudgingly) that investments in software systems really do not live more than 3-5 years without continuous technology updates.

3) Many enterprises cannot efficiently pass information between business units.  This is a recurring theme among those I spoke with.  Just about every company in the world struggles with this type of issue, but I must say I am rather astounded by the magnitude of the problem here.  Chinese companies are organized in the fashion of a holding company with many smaller limited companies (e.g. Liferay Inc. holds Liferay Dalian Software Ltd, Liferay Beijing Ltd (does not exist yet), and Liferay Shanghai Ltd (does not exist yet)).  Each Ltd company often has its own systems for finance, inventory control, etc.  It is not uncommon to find all the major database vendors (and some no longer in existence) in use across the different Ltd companies.  I was told one story where a holding company held Ltd companies using dBase IV, DB2, Oracle, MS-SQL, Informix, Access, and Sybase.  Each Ltd company within this firm held 4-5 other Ltd companies, each again with their own systems.  I shudder to imagine the potential errors when reporting utilization, revenues, or any other statistics.  Overall, it's a problem that would make an enterprise architect run screaming for the door.

more later...
