Getting Started with Building Liferay from Source

Company Blogs May 18, 2017 By Minhchau Dang Staff

When a new intern onboards in the LAX office, their first step is to build Liferay from source. As a side-effect of that, usually the person who handles the intern onboarding will see that the reported ant all time is in hours rather than in minutes, and then start asking people, "Is it normal for it to take this long to build Liferay for the first time?"

It happens frequently enough that sometimes my hyperactive imagination supposes that a UI intern's entire first day must be spent cloning the Liferay GitHub repository and building it from source, and then going home.

In hindsight, this must be what the experience is like for a developer in the community looking to contribute back to Liferay as well. Though, rather than go home in a literal sense, they might stare at the source code they've downloaded and built (assuming they got that far) and think, "Wow, if it took this long just to get started, it must be really terrible to try to do more than that," and choose to go home in a less literal way, but with more dramatic flair.

One of the many community-related discussions we have been having internally is how we can make things better, both internally and externally, when it comes to working with Liferay at the source code level. Questions like, "How do we make it easier to compile Liferay?" or "How do we make it easier to debug Liferay?" After all, just how open source are you if it's an uphill battle to compile from source in order to find out if things are already fixed in branch?

We don't have great answers to these problems yet, but I believe that we can, at a minimum, provide a little more transparency about what we are trying internally to make things better for ourselves. Sharing that information might give all of us a better path forward, because if nothing else, it lets us ask important questions about the pain points rather than bikeshedding about colors.

Step 1: Upgrade Your Build System

Let's say I were to create a survey asking the question, "Which of the following numbers best describes your build time on master in minutes (please round up)?" and gave people a list of options, ranging from 5 minutes and going all the way up to two hours.

This question makes the unstated assumption that you are able to successfully build master from source, because none of the options is, "I can't build master from source." Granted, it may seem strange that I call that an "assumption", because why would you not be able to build an open source product from source?

Trick question.

If you've seen me at past Liferay North America Symposiums, and if you were really knowledgeable about Dell computers in the way that many people are knowledgeable about shoes or cars, you'd know that I've been sporting a Dell Latitude E6510 for a very long time.

It's a nice machine, with a mighty 8 GB of RAM. Since memory is one of the more common bottlenecks, this made it at least on par with some of the machines I saw developers using when I visited Liferay clients as a consultant. However, to be completely honest, a machine with those specifications has no hope of building the current master branch of Liferay without intimate knowledge of Liferay build process internals. Whenever I attempted to build master from source without customizing the build process, my computer was guaranteed to spontaneously reboot itself in the middle.

So why was this not really a problem for other Liferay developers?

Liferay has a policy of asking its developers to accept upgrades to their hardware once every two to three years. The idea is that if new hardware increases your productivity, it's such a low-cost investment that it's always worthwhile to make. A handful of people resist upgrades (inertia, emotional attachment to Home and End keys, etc.), but since almost everyone chooses to upgrade, Liferay has an ivory tower problem, where much of Liferay has no idea what it's like to even start up Liferay on an older machine, let alone compile Liferay on one. For reference, here is what building master actually demands:

  • Liferay tries to do parallel builds, which consumes a lot of memory. To successfully build Liferay from source, a dedicated build system needs 8 GB of memory, while a developer machine with an IDE running needs at least 16 GB of memory.
  • Liferay writes an additional X GB every time you build, a lot of it being just copies of JARs and node_modules folders. While it will succeed on platter drives, if you care about build time, you'll want Liferay source code to live on a solid-state drive to handle the mass file creation.

Eventually, I ran into a separate problem that required a computer upgrade: I needed to run a virtual machine that itself wanted 4 GB, and that, combined with running Liferay alongside an IDE, meant my machine wasn't up to handling my task. After upgrading, the experience of building Liferay is substantially different from how it used to be. While I have other problems, like an oversensitive mousepad, building Liferay is no longer something that makes me wonder what else could possibly go wrong.

If you weren't planning on upgrading your computer in the near future, it doesn't make sense to upgrade your computer just to build Liferay. Instead, consider spinning up a virtual machine in a cloud computing environment that has the minimum requirements, such as something in your company's internal cloud infrastructure, or even a spot instance on Amazon EC2. Then you can have that server perform the build and download the result to your local computer afterwards.

Step 2: Clone Central Repository

So let's assume you've got a computer or virtual machine satisfying the requirements listed above. The next step is to get the source code so you can use this machine to build Liferay from source.

The first step that interns get hung up on is cloning the Liferay repository. If you've ever tried to do that, you'll find that we've violated one of the best practices of version control by committing a large folder full of binary files, .gradle. As a result of having this massive folder, GitHub sends us angry emails and, of course, cloning our repository takes hours.

How does Liferay make this better internally? Well, in the LAX office, the usual answer is to plug in the ethernet cable. Liferay invested heavily in fast internet, and so simply plugging in the ethernet cable makes the multi-hour process finish in 30 minutes.

However, it turns out that there is actually a better answer, even in the LAX office. Each office has a mirror that holds archives of various GitHub repositories, including liferay/liferay-portal. We suspect the original being mirrored is maintained by Quality Assurance, because we have heard that keeping all of our thousands of automated testing servers in sync used to result in angry emails from GitHub. Since it's an internal mirror, this means that downloading X GB and unzipping it takes a few minutes, even over WiFi, and it's on the order of seconds if you plug in your ethernet cable.

So, in order to improve our internal processes, we've been trying to get the people who manage our new hires and new interns to recognize that such a mirror exists and to use it during their onboarding process to save a lot of time for new hires on their first day.

So what does this mean for you?

Essentially, if you plan to clone the code directly onto your computer for simplicity, you'll need to do it at a time when you won't shut down the computer for a few hours and don't need it for anything else (maybe run it overnight), because it's a time-consuming process.

Alternately, have a remote server perform the clone, and then download an archive of the .git folder to your local computer, similar to what Liferay is trying to do internally. This frees up your machine to do useful things; even spinning up an Amazon EC2 spot instance (like an m1.small) and bringing things down with SCP, or with an S3 bucket as an intermediate point, may be beneficial.
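As a minimal sketch of that flow (the host name and archive name here are illustrative; the repository URL is the public one):

# On the remote build server: clone, then pack up just the .git folder.
git clone https://github.com/liferay/liferay-portal.git
tar czf liferay-portal-git.tar.gz -C liferay-portal .git

# On your local machine: download the archive and rebuild a working
# tree from the packed .git folder.
scp build-server:liferay-portal-git.tar.gz .
mkdir liferay-portal
tar xzf liferay-portal-git.tar.gz -C liferay-portal
cd liferay-portal
git reset --hard HEAD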

Step 3: Build Central Repository

The next step is your first build from source. This is done with a single command that theoretically handles everything. However, before you run this single command, you might need to do a few things to reduce the resources it consumes.

  • Liferay issues a lot of requests to the NPM registry in parallel builds. You can cap this by checking build.properties for nodejs.npm.args, taking the commented-out line, and adding it to your own build.USERNAME.properties.
  • Liferay includes a lot of extra things most people never need. You can skip these by checking build.properties for build.include.dirs and using its commented-out value in your build.USERNAME.properties, or adjusting it for your needs if you want more than what it includes by default. (A sketch of both overrides follows this list.)
  • If you're on Windows, disable Windows Defender (or at least disable it on specific folders or drives). The ongoing scan drastically slows down Liferay builds.
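As a minimal sketch, a build.USERNAME.properties might end up looking something like this. The values below are placeholders; copy the actual commented-out lines from build.properties rather than trusting these:

#
# build.USERNAME.properties (placeholder values; take the real ones
# from the commented-out lines in build.properties)
#
nodejs.npm.args=[commented-out value from build.properties]
build.include.dirs=[commented-out value from build.properties]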

After you've thought through all of the above, you're ready for the command itself, which requires that you download and install Apache Ant. Knowing that this is what I'm asking you to download, you might also realize that the entry point for everything is build.xml.

ant all

So now you've built the latest Liferay source code, right?

Another trick question!

What's in the master branch of liferay-portal is actually not the latest code. Liferay has started moving things into subrepositories, which you can see from the hundreds of strangely named repositories that have popped up under the Liferay GitHub account.

However, a lot of these repositories are just placeholders. These placeholders are in what's called "push" mode, where code from the liferay-portal repository is pushed to the subrepository. A handful of them (five at the time of this writing), though, are actually active: they're in what's called "pull" mode, where code is pulled from the subrepository into the liferay-portal repository on demand. You can tell the difference by looking at the .gitrepo file in each subrepository and checking the line describing the mode.
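As a rough illustration (the repository name and the surrounding fields here are hypothetical; the remote and mode lines are the ones that matter), a .gitrepo file looks something like this:

[subrepo]
	remote = git@github.com:liferay/com-liferay-journal.git
	branch = master
	mode = pull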

However, because all of those .gitrepo files are also present in the central repository, once you've cloned the liferay-portal repository, you can use them to find out which subrepositories are active with a little git, grep, and xargs magic run from the root of the repository.

git ls-files modules | grep -F .gitrepo | xargs grep -Fl 'mode = pull' | xargs grep -h 'remote = ' | cut -d'=' -f 2

I will dive into more detail on the subrepositories in a later entry when we talk about submitting fixes. For now, they're not relevant to getting off the ground, beyond an awareness that they exist and that they add additional wrinkles to the fix submission process.

Step 4: Choose an IDE

At this point, you've built Liferay, and the next thing you might want to do is point an IDE with a debugger at the artifact you've built, so that you can see what it's doing after you start it up. However, if you point an IDE at the Liferay source code in order to load the source files, whether it's Netbeans, Eclipse, or IntelliJ, you'll notice that while Liferay has a lot of default configurations populated, these default files are missing about 90% of Liferay's source folders.

If you're using Netbeans, the Liferay Source Netbeans Project Builder will help you hit the ground running; the people who have forked it overlap exactly with the team I know for sure uses Netbeans. Since there are recent commits to the repository, I have confidence that the Netbeans users actively maintain it, though I can't say with equal confidence how they'll react to the news that I'm telling other people about it.

If you're using Eclipse or Liferay IDE, then Jorge Diaz has you covered with his generate-modules-classpath script, which he has blogged about in the past, and his blog post explains its capabilities much more clearly than I would be able to in a mini-section at the end of this getting started guide.

If you're using IntelliJ IDEA Ultimate, you can take advantage of the liferay-intellij project and leave any suggestions. It was originally written as a streams tutorial for Java 7 developers rather than as a tool, and I still try to keep it as a streams tutorial even as I make improvements to it, but I'm open to any improvement ideas that make people's lives easier in interacting with Liferay core code.

Step 5: Bend Liferay to Your Will

So now that everything is set up for your first build, and you're able to at least attach a debugger to Liferay, the next thing is to explain what you can do with this newly discovered power.

However, that's going to require walking through a non-toy example for it to make sense, so I'll do that in my next post so that this one can stay as a "Getting Started" guide.

Pre-Compiling Liferay JSPs Take 2

Company Blogs January 9, 2013 By Minhchau Dang Staff

Many years ago, Liferay's build validation was only getting started, and so slowness related to JSP pre-compilation wasn't a big deal (I think it was really only used during the distribution process). However, I wanted to make it faster for a client-related project (where we couldn't really do a server warm-up after deployment), and the research culminated in a blog entry summarizing my findings.

Fast forward a few years, and I found out that Liferay QA was suffering from build slowness, because Selenium tests have the same requirement of not being able to do a server warm-up before running, and so pre-compilation is turned on. Thus came an improvement to the core build process in LPS-14215, based on the findings of that blog entry.

Fast forward a few more years and JSP pre-compilation is still a standard part of our distribution process. However, after an upgrade to Tomcat 7 for Liferay 6.1, we released milestones for community testing, and the community (as well as other core engineers) reported that everything was very slow the first time you loaded the portal. Poking around, we discovered that Tomcat was recompiling our pre-compiled JSPs when development mode was active.

In a nutshell, we found that any pre-compilation (including the example provided in the Tomcat 7 documentation) provides zero performance benefit. Rather than assuming that class files newer than their JSP files are up-to-date, Tomcat 7 requires the timestamps to match exactly: when Jasper generates a class file, it sets the file's modified timestamp to be the same as the JSP's. This new behavior allows older files to replace newer ones, in case you roll back a change to a JSP and re-copy it to the application server, but it breaks JSP pre-compilation, because class files produced by a separate javac pass carry their compilation time rather than the JSP's timestamp.

Thus, we fixed our own tools with LPS-23032. The TimestampUpdater stand-alone program walks the entire directory and makes sure that the class files have the same timestamps as the java files, which (thanks to Jasper itself behaving properly) will have the same timestamps as the jsp files. All this together makes JSP pre-compilation effective again.
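To make the mechanics concrete, here is a minimal sketch of the idea, not Liferay's actual TimestampUpdater: walk a directory tree and, for each generated class file, copy the timestamp of the matching java file.

import java.io.File;

public class TimestampSyncSketch {

	public static void main(String[] args) {
		walk(new File(args[0]));
	}

	private static void walk(File dir) {
		File[] files = dir.listFiles();

		if (files == null) {
			return;
		}

		for (File file : files) {
			if (file.isDirectory()) {
				walk(file);
			}
			else if (file.getName().endsWith(".class")) {

				// Find the .java file Jasper generated next to the
				// .class file, and copy its timestamp (Jasper already
				// set it to match the .jsp).

				String javaName = file.getName().replaceAll(
					"\\.class$", ".java");

				File javaFile = new File(
					file.getParentFile(), javaName);

				if (javaFile.exists()) {
					file.setLastModified(javaFile.lastModified());
				}
			}
		}
	}

}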

Which leads me to the point of this blog entry: the example script from the previous blog entry has been updated for Tomcat 7 and now invokes Liferay's TimestampUpdater to fix file timestamps. This updated example is available in my document library (download here). See the README.txt contained in the example script for additional instructions on how to use it.

Note that this only works for Liferay 6.1 and beyond, because we didn't have to add the class in previous releases (our default bundle was Tomcat 6). If you need to deploy Liferay 6.0 or earlier against Tomcat 7 for whatever reason, you can extract the TimestampUpdater from the portal-impl.jar of Liferay 6.1 and put it in the portal-impl.jar for your version of Liferay.

Avoiding Accidental Automatic Upgrades

Company Blogs January 4, 2013 By Minhchau Dang Staff

Once upon a time, a long-time Liferay user decided to upgrade from an old version of Liferay (Liferay 6.0 GA2) to a new version of Liferay (Liferay 6.1 GA2). They performed this upgrade by shutting down Liferay 6.0 running on an old VM, starting Liferay 6.1 on a new VM, and then doing some DNS magic and some load balancer magic to switch to the new VM after the upgrade completed.

A day later, a routine Cron job restarted Liferay 6.0 on the old VM, which still pointed at the same database used by Liferay 6.1.

Liferay 6.0's upgrade code checked to see if it had to run any of its upgrade processes, found that the build number stored in the Release_ table (6101) was higher than any of the upgrade processes it wanted to run, and proceeded to update the Release_ table with its own version number (6006).

Naturally, because nobody was accessing that old version of the portal, it didn't matter. It didn't touch any other tables, so Liferay 6.1 continued to run fine. That is, until Liferay 6.1 had to be restarted a week later for some routine maintenance.

As Liferay restarted, Liferay 6.1's upgrade code checked to see if it had to run any of its upgrade processes, found that the build number stored in the Release_ table (6006) was lower than some of the upgrade processes it wanted to run, and so ran them it did. Of course, because the data was actually in the 6101 state, all of these upgrade processes wound up corrupting the data, rendering the whole Liferay instance unusable.

No problem, the client had followed industry best practices and regularly backed up their system. So, they restored the database from yesterday's backup, thinking that all would now be well.

Alas, it was not meant to be. The damage to the database had been done a week before, and so the bad data was already in the database. Every time the client restored the database from yesterday's backup, Liferay re-detected that the version number was different and that it needed to re-upgrade from 6006 to 6101. As a result, Liferay obediently re-corrupted the database.

This raises the following question: how do we avoid the data corruption resulting from an older release running against a newer release's database?

Enter LPS-21250, which prevents Liferay from starting up if its release number is lower than what is seen in the Release_ table. This means that if this situation were to repeat with 6.1 GA2 and 6.2 M3, 6.1 GA2 would fail to start up and not update the Release_ table.

Of course, this doesn't fix the problem of earlier versions of Liferay (such as Liferay 6.0) updating the Release_ table, raising a second question: how do we handle the problem where earlier versions of Liferay (such as Liferay 6.0) might update the Release_ table?

In Liferay 6.0 and in Liferay 6.1 GA1, it is possible to prevent upgrade processes from running by setting the upgrade.processes property to blank. The Release_ table will still be updated, but at least Liferay won't attempt to re-corrupt its own data. This strategy will prevent upgrade processes from running if you were to accidentally start a Liferay 5.x release against a Liferay 6.0 database or a Liferay 6.1 GA1 database.

	upgrade.processes=

In Liferay 6.1 GA2, a blank value for upgrade.processes has been given a new meaning while adding support for LPS-25637: automatically try to detect the upgrade processes that need to be run. So, this workaround no longer works.

So at this time, for Liferay 6.1 GA2 and possibly the Liferay 6.2 releases, you have two ways of handling the problem of a Liferay 5.x or a Liferay 6.0 running against your database. You can either create a no-op upgrade process, or you can prevent Liferay from starting up at all if it detects a version number change.

In the former case, there's no native UpgradeProcess in Liferay 6.1 which truly does nothing. So, you can either use an upgrade process that is very unlikely to run (such as the 4.4.2 to 5.0.0 upgrade process), write an EXT plugin which contains a custom do-nothing upgrade process and specify it in your portal properties, or specify a bad value for upgrade.processes and accept a stack trace on every startup.

	upgrade.processes=com.liferay.portal.upgrade.UpgradeProcess_5_0_0
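If you go the EXT plugin route instead, a do-nothing upgrade process is a small amount of code. This is a minimal sketch (the package and class name are made up, and it assumes the Liferay 6.1 kernel API):

package com.example.upgrade;

import com.liferay.portal.kernel.upgrade.UpgradeProcess;

public class UpgradeNoOp extends UpgradeProcess {

	@Override
	protected void doUpgrade() throws Exception {

		// Intentionally empty: this satisfies the requirement that
		// upgrade.processes be non-blank without touching any data.

	}

}

Then point upgrade.processes at it:

	upgrade.processes=com.example.upgrade.UpgradeNoOp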

In the latter case, LPS-25637 also introduced a new feature which skips all upgrade processes if there's a version number match (previous versions of Liferay would check each one individually). Leveraging that, you can make use of a built-in run-on-all-releases upgrade process that was originally intended to allow you to split an upgrade in chunks, and effectively does nothing more than shut down Liferay.

	upgrade.processes=com.liferay.portal.upgrade.UpgradePause

Updating Document Library Hooks

Company Blogs August 28, 2009 By Minhchau Dang Staff

A common problem that comes up is the need to change how documents are stored in the Document Library.

For example, you may start out storing documents using Jackrabbit hooked to a database. However, as time goes on and you find yourself using more Liferay deployments, the number of database connections reserved for Jackrabbit alone gets dangerously close to the maximum number of database connections supported by your database vendor.

So, you want to switch from using JCRHook over to using the FileSystemHook to store documents on a SAN.

If your migration uses two different hooks and you have access to the portal class loader (for example, you're running in the EXT environment, or you have the ability to update the context class loader for your web application like in Tomcat 5.5), the solution is straightforward.

// Instantiate your source and target hooks, then, for each document
// and version, pull the data from the source hook and push it to the
// target hook:
InputStream exportStream =
	sourceHook.getFileAsStream(
		companyId, folderId, fileName, versionNumber);

targetHook.updateFile(
	companyId, portletId, groupId, folderId, fileName,
	versionNumber, fileName, fileEntryId, properties, modifiedDate,
	tagsCategories, tagsEntries, exportStream);

In summary, you instantiate the hook objects, pull data from your source hook and push it to your target hook. Then, update your configurations and restart your server to use the new document library hook. (In case you're not sure how to change document library hooks, see the comments for the dl.hook.impl property in portal.properties.)
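For example (the property names are the ones from portal.properties of that era; the root directory value is illustrative), pointing Liferay at the file system hook looks something like this in your portal-ext.properties:

dl.hook.impl=com.liferay.documentlibrary.util.FileSystemHook
dl.hook.file.system.root.dir=/mnt/san/document_library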

If you'd like to see how this is accomplished via code samples rather than via textual explanations, these document links might help (just to note, the sample plugins SDK portlets leverage the portal class loader in order to access the classes found in portal-impl.jar, and so you must use Tomcat 5.5 or another servlet container that supports a similar mechanism in order to use them):

However, it's not always so easy. Another common problem is where you start out using Jackrabbit hooked up to a file system (perhaps in the earlier iterations of Liferay where JCRHook was the default), but you want to move to a clustered environment and you do not have access to a SAN.

Therefore, you want to migrate over to using Jackrabbit hooked to a database.

This is different from the previous problem in that you're using the exact same hook in both cases. The way Liferay handles Jackrabbit inside this hook means you can only have one active Jackrabbit workspace configuration (specified in your portal.properties file), so simply instantiating two different hooks is not possible.

The solution here is to run the migration twice.

First, with the original JCRHook configuration, export the data to an intermediate hook (for example, the Liferay FileSystemHook) and shut down your Liferay instance. Then, update portal.properties and/or repository.xml to reflect the desired JCRHook configuration, and import from your intermediate hook back into JCRHook.

If you do not have access to the portal class loader, the migration is less straightforward, because you won't have access to the hook classes themselves.

// Phase 1, with the original configuration: export each document to an
// intermediate location on disk.
InputStream exportStream =
	DLLocalServiceUtil.getFileAsStream(
		companyId, folderId, fileName, versionNumber);

FileUtil.write(intermediateFileLocation, exportStream);

// Phase 2, after updating the configuration and restarting the server:
// re-import each document from the intermediate location.
InputStream importStream =
	new FileInputStream(intermediateFileLocation);

DLLocalServiceUtil.updateFile(
	companyId, portletId, groupId, folderId, fileName,
	versionNumber, fileName, fileEntryId, properties, modifiedDate,
	tagsCategories, tagsEntries, importStream);

In summary, you'll have to read the documents using DLLocalServiceUtil with the original configuration and save them to disk in a way that lets you recover all the data needed to call updateFile (perhaps mirroring the directory layout that FileSystemHook uses). Then, re-import those exported documents using DLLocalServiceUtil after updating your configuration and restarting the server.

Using the Dynamic Query API

Company Blogs August 17, 2009 By Minhchau Dang Staff

Assuming that indexes have already been created for the fields you're querying, the Dynamic Query API is a great way to create custom queries against Liferay objects without having to create custom finders and services.

However, there are some "gotchas" that I've bumped into when using them, and I'm hoping that sharing these experiences will help someone out there if they're banging their head against a wall.

One gotcha was getting empty results, even though the equivalent SQL query was definitely returning something.

By default, the Dynamic Query API uses the current thread's class loader rather than the portal class loader. Because the Impl classes can only be found in Liferay's class loader, when you try to use the Dynamic Query API from a plugin that lives in a different class loader, Hibernate silently returns nothing.

A solution is to pass in the portal class loader when initializing your dynamic query object, so that Hibernate knows to use the portal class loader when looking for classes.

DynamicQuery query =
	DynamicQueryFactoryUtil.forClass(
		UserGroupRole.class, PortalClassLoaderUtil.getClassLoader());
 

A second gotcha was figuring out how to use SQL projections (more commonly known as the stuff you put in the SELECT clause), the most common cases being to select a specific column or to get a row count.

The gotcha was that the sample documentation on the Liferay Wiki uses Hibernate's DetachedCriteria classes, and trying to add Projections.rowCount() to a DynamicQuery object gave me a compile error, because the Liferay DynamicQuery object requires using Liferay's version of the Projection class, rather than the Hibernate version.

To resolve this gotcha, you can use ProjectionFactoryUtil to get the appropriate object.

query.setProjection(ProjectionFactoryUtil.rowCount());
 

A third gotcha was a "could not resolve property" error when I tried to add a restriction via RestrictionsFactoryUtil. Even though it looked like the bean class definitely had that attribute defined in portal-hbm.xml, Hibernate wasn't able to figure out what property needed to be used.

The gotcha is that some of Liferay's objects use composite keys. When using composite keys with Hibernate's detached criteria and Liferay's dynamic queries, the name of the property must include the name of the composite key. In the case of the Hibernate definitions created by Liferay's Service Builder, composite keys are always named primaryKey.

Therefore, the solution is to use primaryKey.userId instead of userId.

Criterion criterion =
	RestrictionsFactoryUtil.eq("primaryKey.userId", userId);
 

A fourth gotcha is that even if you don't specify a projection (thus defaulting to selecting all columns, an implied "give me the entity"), casting directly to a List<T> won't work the way it does in custom finders, because you're getting back a List<Object>, not a List<T>.

The quick and dirty solution is to either (a) use addAll on a typed List and typecast (simulating what happens in a custom finder), or (b) add each result to a List<T> one element at a time (cleaner to read). Both are sketched below.
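Here is a minimal sketch of both approaches, reusing the UserGroupRole entity and the query object from the earlier examples (the dynamicQuery service call shown is assumed for illustration):

List<Object> results =
	UserGroupRoleLocalServiceUtil.dynamicQuery(query);

// (a) Typecast and addAll, simulating what a custom finder does:
List<UserGroupRole> userGroupRoles =
	new ArrayList<UserGroupRole>();

userGroupRoles.addAll((List<UserGroupRole>)(List<?>)results);

// (b) Or add each result individually (cleaner to read):
for (Object result : results) {
	userGroupRoles.add((UserGroupRole)result);
}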

Adding a Plugins Portlet to the Control Panel

Company Blogs December 30, 2008 By Minhchau Dang Staff

A "Gotcha!" situation came up today when I attempted to add a portlet developed in the plugins environment to the Control Panel (which is a new feature being developed for 5.2.0 that you can read about in Jorge's blog entry).

In a nutshell, the Control Panel provides a centralized administrative interface for the entire portal. Unlike a standard layout where you 'manage pages' and put portlets anywhere, whether or not a portlet shows up inside of the Control Panel depends on whether you've set the following elements in your liferay-portlet.xml (a sample snippet follows this list):

  • control-panel-entry-category: The 'category' where your portlet will appear. There are currently 4 valid values for this element: 'my', 'content', 'portal', and 'server'.
  • control-panel-entry-weight: Determines the relative ordering for your portlet within a given category. The higher the number, the lower in the list your portlet will appear within that category.
  • control-panel-entry-class: The name of a class that implements the ControlPanelEntry interface which determines who can see the portlet in the control panel via an isVisible method.
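As a minimal sketch (the portlet name, weight, and entry class below are illustrative), the relevant portion of liferay-portlet.xml looks something like this:

<portlet>
	<portlet-name>my-admin-portlet</portlet-name>
	<control-panel-entry-category>portal</control-panel-entry-category>
	<control-panel-entry-weight>99.5</control-panel-entry-weight>
	<control-panel-entry-class>com.example.MyControlPanelEntry</control-panel-entry-class>
</portlet>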

By making sure to define these elements in your liferay-portlet.xml, you can theoretically add any portlet (whether they be portlets built in the Extensions environment or Plugins environment) to the new Control Panel interface.

In my case, I couldn't think of a page/layout which would fit for the plugin I was developing, but since it had administrative elements to it, I felt the best place for it would be the Control Panel. So, I updated the dtd for liferay-portlet.xml to 5.2.0 and added the appropriate control panel elements.

Unexpectedly, the new plugin portlet did not show up in the Control Panel.

After debugging com.liferay.portal.util.PortalImpl, I verified that the portlet was recognized as a Control Panel portlet and that it was added to the java.util.Set, but by the time the method returned, the portlet had disappeared. I was at a loss as to why.

However, it turns out those abstract algebra classes that I took in college came in handy, and I realized where the "Gotcha!" was hidden:

Set portletsSet = new TreeSet(
	new PortletControlPanelWeightComparator());
 

By their mathematical definition, every element in a set must be unique. After checking com.liferay.portal.util.comparator.PortletControlPanelWeightComparator, I discovered that the control panel render weight is the only thing that matters for determining uniqueness of a portlet within a Control Panel category. As long as render weights are equal, they are treated as the same portlet.

In summary, my portlet shared the same render weight as another Control Panel portlet (I had given it the same render weight as My Account because I copy-pasted), so it was replaced as soon as the My Account portlet was added to the java.util.TreeSet. That's why it was clearly being added, yet had disappeared by the time the method returned.

So, in the event that you are planning to leverage the new Control Panel feature for portlets that you're developing, bear in mind that you need to keep your render weights different from those of the portlets which ship with Liferay, and from any other portlets you may want to add to your Liferay instance, and you'll be okay.

Alternatively, you could extend com.liferay.portal.util.PortalImpl, override the definition of getControlPanelPortlets, and use a different comparator which also checks portlet ids in the event of ties.
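A minimal sketch of such a tie-breaking comparator (the class name is made up, the ordering direction is assumed, and the Portlet getters are the ones from that era's API):

import java.util.Comparator;

import com.liferay.portal.model.Portlet;

public class ControlPanelWeightPortletIdComparator
	implements Comparator<Portlet> {

	public int compare(Portlet portlet1, Portlet portlet2) {
		int value = Double.compare(
			portlet1.getControlPanelEntryWeight(),
			portlet2.getControlPanelEntryWeight());

		if (value != 0) {
			return value;
		}

		// Break ties by portlet id so that portlets with equal weights
		// are still considered distinct by the TreeSet.
		return portlet1.getPortletId().compareTo(
			portlet2.getPortletId());
	}

}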

Running Ant With a Double-Click

Company Blogs December 19, 2008 By Minhchau Dang Staff

There are times when I don't want to deal with the start up time of an integrated development environment and opt to use a fast-loading text editor instead. By making this trade-off, I wind up losing many of the nifty productivity features that are built into IDEs.

One feature that can be taken for granted is the ability to run build targets with a double-click. By leaving the comfort of an IDE, I need to open up a command window (using the CmdHere PowerToy to skip a navigation step), call ant {target-name}, wait for the task to finish, then call exit once it completes successfully.

However, I realized that I don't have to lose that. I usually run default targets (so I was waiting for a command window to load just to type three letters: ant). Since I usually don't edit build.xml files, I could safely configure Windows Explorer to call ant in the appropriate directory whenever I double-click it:

  1. Navigate to the ANT_HOME directory and create a file called ant-noargs.bat. Inside of that file, add the following contents:

    @if %~nx1 neq build.xml start "" "C:\Program Files\Textpad 5\TextPad.exe" "%~f1"
    @if %~nx1 equ build.xml call ant
    @if errorlevel 1 pause
     

    The filename check is so this only applies to 'build.xml' files and so I load my default text editor for all other files. The additional error level check exists so I don't have to look at the command window at all to see if anything went wrong -- it will automatically close when it's complete and will only stay open if something went wrong.

  2. Navigate to a directory containing a build.xml file, right-click on the build.xml file, and select Open With... from the context menu. If extra options show up, select the Choose Program... option.

  3. Use the Browse... button and navigate back to the ant-noargs.bat created earlier. Confirm the selection. When returning to the dialog, hit the checkbox which reads Always use the selected program to open this kind of file and confirm by hitting Ok.

As a result of the above, deploying the extensions environment, hooks, themes, and portlets is still a double-click away (or an Enter key away if I'm currently using the keyboard to navigate in Windows Explorer) without ever loading an IDE.

Pre-Compiling Liferay JSPs

Company Blogs April 21, 2008 By Minhchau Dang Staff

A feature I recently discovered was the ability to precompile all the JSPs in Liferay. The motivation behind this discovery was the need for sanity checking after making a small change to nearly a hundred JSPs across many different portlets.

After using it on this particular problem, I wanted to know if it was possible to apply pre-compilation to a current ongoing project.

In this project, the development cycle involves testing changes in a development environment, generating a clean build from source and deploying it to an existing Tomcat instance, promoting those changes to an alpha environment, validating the changes again in the alpha environment, promoting those changes to a beta environment, and validating the changes one more time in the beta environment. This process needs to be completed in a two-hour window, so I wanted to use the pre-compilation process to reduce the time spent waiting for page loads.

So, the first question is, how do you set up Liferay for pre-compiling JSPs?

The built-in approach is to set your jsp.precompile property to on in your build.properties. In this scenario, Liferay's JSPCompiler will precompile each file individually, avoiding potential out of memory exceptions resulting from compiling large numbers of JSPs. It also has the nice property of aborting as soon as the first compilation error is encountered, making it very friendly for sanity checks. However, in a clean environment, it takes ten minutes to run to completion, making this practical only for unattended builds and sanity checks.

Therefore, a faster option was necessary. After digging into the documentation for Jasper, I discovered that faster options are available if you modify the build scripts being used in your development environment. The sequence of changes that were made in my local development environment are discussed below.

One option becomes available if you set your ANT_OPTS environment variable to give Ant a lot of memory. Once you do this, you can modify the appropriate build file (build.xml in portal-web or build-parent.xml in ext-web) to take advantage of the extra memory by calling the javac Ant task directly in the compile-common-jsp target; a rough sketch of the idea follows the properties example below. In a clean environment, this shortens the pre-compilation time to three minutes, making this solution attractive for general deployment.

In the example build.liferay.properties:

jsp.precompile=on
jsp.precompile.fast=on
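
And the idea behind the build file change, as an illustrative sketch (this is not the actual portal-web build.xml; the property names here are placeholders):

<target name="compile-common-jsp">

	<!--
	Compile all of the generated JSP .java files in a single javac
	invocation, relying on the extra memory granted via ANT_OPTS
	(for example, set ANT_OPTS=-Xmx1024m before running Ant).
	-->

	<javac
		srcdir="${jsp.java.dir}"
		destdir="${jsp.classes.dir}"
		classpathref="jsp.compile.classpath"
	/>
</target>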
 

But I was pretty sure I could do better, since over a minute was being spent decompressing JARs. By modifying the build target specific to an application server (for example, compile-tomcat), you remove the need to decompress JARs prior to precompilation, as you know exactly where the JARs are found and can setup a simple classpath accordingly. In the particular example where your application server is Tomcat, this step reduces the JSP pre-compilation time in a clean environment to one minute, making this solution attractive for Liferay core and extensions development.

In the example build.liferay.properties:

jsp.precompile=on
jsp.precompile.fast=on
jsp.precompile.faster=on
 

In the process of creating an application-specific build target for the Liferay WAR, it's possible to take this one step further and create a full build script which compiles every plugin WAR. However, since the plugin build scripts involve hot deployment rather than direct deployment to the application server, this is a bit tricky: the work and webapps folders may go out of sync in terms of timestamps, negating the performance benefit of pre-compiling your JSPs if the lack of synchronization results in a recompilation.

Therefore, in the example build script (which makes use of the Ant-Contrib library to iterate through all the plugins folders, and is available for download here), it is assumed that Liferay and the various Liferay/JSR-168 plugins have already been deployed to Tomcat's webapps folder, and the build script and related files are inside of Tomcat's webapps folder. Running ant clean will wipe the work directory, and ant all will compile the ROOT context, followed by all the different plugins.

So, I now have a script which runs in less than two minutes and generates all the appropriate Java code and class files for the JSPs in Liferay, along with those of every plugin in the development environment, speeding up the validation process.

Since these modifications depend on Liferay being deployed to the ROOT context, it can't really be committed to core without some modifications. Still, hopefully this knowledge is also useful to someone else.

Liferay Code Format

Company Blogs December 26, 2007 By Minhchau Dang Staff

After reading through Jorge's blog post on the guidelines for Liferay contributions, and after following the link to the style guidelines on the Liferay wiki, I put together a configuration file which works with Eclipse's built-in code formatter to adjust the whitespace in your code so that it is compatible with the stated Liferay guidelines.

To use this configuration file, go to your Eclipse preferences dialog. Look under Java > Code Style > Formatter (screenshot), and then Import the file linked above. A new profile called "Liferay [user]" will then be made available (screenshot). Finally, click the Apply button to set it for the current workspace.

Once this configuration file is imported and applied to the current workspace, every time you run Source > Format, Eclipse will automatically adjust the whitespace in your code to make it conform to the stated Liferay guidelines. To bulk reformat, you can right-click on any given folder while in the Navigator view (or Ctrl+Click on the folder under OS X), and select Source > Format.

I don't know of any tools which help satisfy all of the Liferay style guidelines not pertaining to whitespace. However, based on its documentation, the commercial version of Jalopy from TRIEMAX software appears to come very close with its sorting capabilities.

Eclipse Open Resource Dialog

Company Blogs November 20, 2007 By Minhchau Dang Staff

In Eclipse, everything is a plugin, including the IDE itself. So, if there's something which bothers you, all you need to do is replace the plugin with something which works in a way more consistent with your personal preferences, or if the plugin is open source, tweak the existing plugin to suit your needs.

In my case, what bothered me was this: I don't have any reason to look at .svn-base files or .class files, so I'd prefer they not show up. Playing with working sets doesn't solve the problem, because (as far as I know) there's no way to exclude .svn folders from the working set without adding all the files individually. I tried to add extension points, but I could never get the filters to show up. So, all other options exhausted, I went looking for a way to modify the source code.

In order to modify the "Open Resource" dialog, I needed to find the FilteredResourceSelectionDialog class found in the org.eclipse.ui.dialogs package in the org.eclipse.ui.ide Java archive. A bandwidth-intensive way to get the source code for that one source file was to download Eclipse Classic, and find it in the src.zip found under org.eclipse.ui.ide in the org.eclipse.platform.source folder.

I then overrode the matches method in FilteredResourceSelectionDialog.ResourceFilter to exclude all file names ending in .svn-base and .class, created a quick build file to include all the Eclipse plugins jars in my classpath, compiled, copied the resulting class files into the appropriate Java archive, and got this: a clean view showing only the files I might want to open.

Portlet Ids Bookmarklet

Company Blogs November 2, 2007 By Minhchau Dang Staff

The nice thing about Liferay's drag and drop layout is that you (usually) don't have to remember any of the portlet ids. At least, until you do.

If you skim through portal.properties, there's a set of properties all prefixed with default.user.layout which let you control this. However, it wants portlet ids, which Liferay doesn't readily give to you unless you dig through /portal/portal-web, /ext/ext-web, and /plugins/portlets.

So, I wanted to create something which converts a layout like this into a quick reference for all the different portlet ids, like this, without digging through the backend and/or mousing over the configuration links.

So, while I'm not familiar enough with Liferay to implement a server-side way to do this, I dug through "view page source" and found that it was pretty straightforward to write a bookmarklet which, when run on a Liferay-based page, gives me the name-to-id mapping for all the portlets currently on the page in a Liferay.Popup.

To use it, convert the script below into a bookmarklet (you can use this one which turns up in a Google search) and paste the bookmarklet into your address bar. Voila! Instant popup containing the portlet ids.
