
Troubleshooting Liferay from Source

Company Blogs September 6, 2017 By Minhchau Dang Staff

Many months later, it's time to continue the series on Liferay from Source.

For this segment, we'll start with the following question: why would you have wanted to build from source in the first place?

This particular series entry assumes that you want to fix a bug that affects you. Or maybe you feel that a feature is missing from an existing module, and you want to contribute that change so that it gets incorporated into the Liferay upstream repository as something that Liferay maintains, rather than a custom module that you maintain. In both cases, your ultimate goal is to have your contribution benefit the larger community instead of being isolated to your own Liferay installation.

In this example, I'm going to walk you through the process of troubleshooting a bug that exists in the 7.0.3 GA4 code base but seemingly not in the current master code base. It's an example of how Liferay internally handles the two diverging branches. Along the way, you'll be introduced to Liferay's internal processes for handling bugs, and ultimately you'll find different techniques that our internal developers use for troubleshooting issues. Finally, you'll see how to deploy your changes on top of what you've already built from source.

Step 1: Encounter the Problem

First, let's talk about the problem I want to solve (which in this case happens to be a bug), which is logged as LPS-74173. Essentially, the fix for LPS-64906 was incomplete. If you disable the scheduler by setting scheduler.enabled=false and then navigate to the Forms portlet by selecting Liferay > Content > Forms, you are unable to view the portlet due to an UnsupportedOperationException.

    at com.liferay.portal.workflow.WorkflowDefinitionManagerProxyBean.getActiveWorkflowDefinitions(

If you'd like to try this yourself, you will probably use a specific tag where the bug definitely exists, which is the tag corresponding to 7.0.3 GA4, aptly named 7.0.3-ga4. In case you are new to Git and do not yet know how to pull down a specific tag without pulling down every tag (and Liferay can accumulate lots of them), you can do it with these commands from the folder where you cloned the repository (assuming your remote is named origin):

git fetch --no-tags origin refs/tags/7.0.3-ga4:refs/tags/7.0.3-ga4
git checkout 7.0.3-ga4

Note that you can either build this tag from source, or you can simply use the existing release bundle. Using existing bundles is generally preferred if you simply want to use Liferay, but Liferay's decision to place everything inside of .lpkg files (an additional layer of zip files) and requiring marketplace overrides makes debugging issues and fixing them much more difficult, as you may be required to restart the server after every change for the overrides to be recognized.

Therefore, it's recommended to build from source if your end goal is to fix an issue.

Step 2: Reproduce in Master, Part 1

As part of Liferay's validation process, we try to determine if the issue is still present in the master branch. This is because in most (but not all) cases, Liferay's red tape dictates that you should try to make sure that the fix is applied to the unstable branch (master) before making it available in the stable version (in this case, 7.0.x).

Master itself doesn't have any useful tags, so if you'd like to try this yourself, you will probably use a specific hash where the bug definitely exists, because it might already be fixed in master by the time you read this blog post. In this case, the bug is known to exist at 325c62f9b7709c5c96ef484c9b050199500a41eb (which is where Liferay was when I started this blog entry), which you can check out with the following command:

git checkout 325c62f9b7709c5c96ef484c9b050199500a41eb

After you've built Liferay from source, start up Tomcat in debug mode. If you navigate to the Forms portlet with scheduler disabled, you'll find that the issue doesn't appear.
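In case "debug mode" is unfamiliar: it means the JVM starts with a JDWP agent listening so that a remote debugger can attach. With the Tomcat bundle, one way is to run ./catalina.sh jpda start (which listens on port 8000 by default); alternatively, you can append the agent flag yourself, for example in Tomcat's bin/setenv.sh (a sketch; the port is your choice):

```shell
# Append the JDWP agent to Tomcat's JVM options (e.g. in bin/setenv.sh).
# suspend=n lets Tomcat start without waiting for a debugger to attach.
CATALINA_OPTS="$CATALINA_OPTS -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000"
```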

Digging into the code, an easy conclusion to draw is that the issue does not occur in master.

You might draw this conclusion because, as a side-effect of some refactoring in LPS-66761, the code throwing the exception is no longer directly included in the portlet class of the Forms portlet. However, the code still exists in a different class: WorkflowDefinitionsDataProvider.

Because the code still exists, you now deal with another layer of Liferay red tape. Before you can bring the supposed fix back to a release branch, you have to prove to yourself (and to the reviewer) that this code change actually fixed the issue instead of masking the issue.

One approach is to perform a git bisect, which makes sense if you're fairly sure that the fix works. In my case, I'm pretty skeptical (the code throwing the stack trace hasn't been touched), so I'd rather dig into where the code moved to see if a similar bug is still present.

The trick, of course, is figuring out how to get to the new code.

Step 3: Reproduce in Master, Part 2

Because it's an OSGi component, it's possible the error goes away due to the component no longer meeting its requirements when scheduler is disabled (which would make our job really easy). So, let's ask ourselves this question: is an instance of this class still registered as an OSGi component?

Let's find out by asking Gogo shell. The following code snippet asks OSGi about all the components that claimed that they provided the DDMDataProvider service (the second argument that is currently null is a filter), and then prints out the class names. We're checking to see if our WorkflowDefinitionsDataProvider appears in the list.

each [($.context getServiceReferences "com.liferay.dynamic.data.mapping.data.provider.DDMDataProvider" null)] { ($.context service $it) class }

Sure enough, it shows up. This means that in theory, this code can be called, because our component is available.

Step 4: Reproduce in Master, Part 3

The next part is to figure out how to actually get the code to be called. There are a lot of different ways to do this (dig up the call stack), but since in this case the code simply moved, an easier test is to see if simply visiting the original page hits a breakpoint we set in our code. We achieve this by attaching a remote debugger, adding a breakpoint to the getData method in WorkflowDefinitionsDataProvider, and visiting the Forms portlet.


Looks like our breakpoint was hit! Let's step over it and see what happens.


Liferay is swallowing the exception because WARN level logging is not enabled on the class DDMDataProviderInvokerImpl. Let's enable it and see if we can reproduce the issue. You reach the logging interface by choosing Control Panel > Configuration > Server Administration, choosing Log Levels, choosing Add Category, and adding DDMDataProviderInvokerImpl at WARN level.

After visiting the Forms portlet again, we see the same error as we saw in 7.0.3-ga4, though the portlet still renders because the exception is swallowed.

    at com.liferay.portal.workflow.WorkflowDefinitionManagerProxyBean.getActiveWorkflowDefinitions(

Detour: Workflow Implementation Details

Now that we've encountered the bug, let's take a step back and understand why this exception was raised.

Workflow is a weird beast where a dummy implementation is always provided by default, which is WorkflowDefinitionManagerProxyBean. Therefore, if the code ever calls any methods on this default implementation, it will throw an exception. The only way for these exceptions to go away is for a different, non-default implementation to be provided. In the case of Liferay, this non-default implementation is known as Kaleo.
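To make the "dummy implementation" idea concrete, here is a minimal, self-contained sketch of the pattern (simplified, hypothetical names; not Liferay's actual classes):

```java
// A minimal sketch of the pattern described above: the dummy "proxy bean" is
// always registered, and every method on it throws until a real
// implementation (Kaleo) takes its place. All names here are simplified.
public class ProxyBeanDemo {

    interface WorkflowDefinitionManager {
        int getActiveWorkflowDefinitionCount();
    }

    // Stand-in for WorkflowDefinitionManagerProxyBean: a default that only throws.
    static class DummyWorkflowDefinitionManager implements WorkflowDefinitionManager {
        public int getActiveWorkflowDefinitionCount() {
            throw new UnsupportedOperationException();
        }
    }

    public static void main(String[] args) {
        WorkflowDefinitionManager manager = new DummyWorkflowDefinitionManager();

        try {
            manager.getActiveWorkflowDefinitionCount();
        }
        catch (UnsupportedOperationException uoe) {
            // This is exactly what the Forms portlet ran into with scheduler disabled.
            System.out.println("dummy workflow implementation in use");
        }
    }
}
```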

Looking at the code block that is throwing the exception, the call to _workflowEngineManager succeeds, but the call to _workflowDefinitionManager fails. This suggests that the non-dummy implementation is available for WorkflowEngineManager, but it is not available for WorkflowDefinitionManager. Let's use Gogo shell to confirm.

each [($.context getServiceReferences "com.liferay.portal.kernel.workflow.WorkflowDefinitionManager" null)] { ($.context service $it) class }
each [($.context getServiceReferences "com.liferay.portal.kernel.workflow.WorkflowEngineManager" null)] { ($.context service $it) class }

This code confirms that there is only a proxy implementation for WorkflowDefinitionManager, and both a proxy and a non-proxy implementation for WorkflowEngineManager. So the question is, why did the non-proxy implementation for WorkflowDefinitionManager (WorkflowDefinitionManagerImpl) not get registered to OSGi? The most likely answer is that its dependencies were not satisfied.

We can take a look at each of its non-optional @Reference annotations and check whether a matching service exists for each one.

$.context getServiceReferences "com.liferay.portal.workflow.kaleo.service.KaleoDefinitionLocalService" null
$.context getServiceReferences "com.liferay.portal.workflow.kaleo.KaleoWorkflowModelConverter" null
$.context getServiceReferences "com.liferay.portal.kernel.workflow.comparator.WorkflowComparatorFactory" null

Running these commands, we find that KaleoDefinitionLocalService and KaleoWorkflowModelConverter have not been satisfied. Interestingly, KaleoDefinitionLocalService follows the naming format for a Liferay service builder service (it ends with LocalService), and a quick look at the source code in an IDE reveals that it is a service builder service provided by the portal-workflow-kaleo-service module, made available in a bundle with the symbolic name com.liferay.portal.workflow.kaleo.service.

Detour: Wiring Spring into OSGi

If you've been using Liferay for a while, you know that service builder services are not created and managed by OSGi; rather, they are created and managed by Spring. However, a lot of Liferay code consumes them as though OSGi manages them. You might wonder: how does that actually work?

Liferay is coded so that a bundle waits for a variety of things to be registered to OSGi, and then it has code that will register its Spring managed dependencies as OSGi components.

Of course, the whole registration process has dependencies, which are managed in a build-time-generated file, OSGI-INF/context/context.dependencies, for each bundle. These are then assembled into a placeholder component and registered with the Apache Felix dependency manager.

If for any reason any of these components have unsatisfied dependencies (one common example being the Release component for the module, like in LPS-66607), the bundle will be active, but none of the Spring-managed components (such as message listener configurations) will be made available as OSGi components. As you can imagine, this means that all the OSGi components that depend on those Spring-managed components being registered will also fail to resolve.
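As a rough mental model of that gating behavior, here is a small, self-contained simulation (not Liferay's actual code; names and structure are hypothetical):

```java
// Minimal simulation of the gating behavior described above: Spring-managed
// beans are only published to the registry once every entry in
// context.dependencies can be resolved; one missing entry blocks all of them.
import java.util.*;

public class ContextDependenciesDemo {

    static List<String> publishSpringBeans(
            Collection<String> contextDependencies, Set<String> availableServices,
            List<String> springBeans) {
        for (String dependency : contextDependencies) {
            if (!availableServices.contains(dependency)) {
                return Collections.emptyList();  // one unsatisfied entry blocks everything
            }
        }
        return springBeans;  // all satisfied: every bean becomes an OSGi component
    }

    public static void main(String[] args) {
        Set<String> available = new HashSet<>(Arrays.asList(
            "com.liferay.counter.kernel.service.CounterLocalService"));

        List<String> deps = Arrays.asList(
            "com.liferay.counter.kernel.service.CounterLocalService",
            "com.liferay.portal.kernel.scheduler.TriggerFactory");  // missing when scheduler is off

        List<String> published = publishSpringBeans(
            deps, available, Arrays.asList("KaleoDefinitionLocalService"));

        System.out.println(published.isEmpty());  // true: nothing was registered
    }
}
```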

Naive Debugging

One way to troubleshoot this issue is to construct a Gosh script that basically does what we've been doing in all the examples above. It will read in OSGI-INF/context/context.dependencies from the bundle and check each entry to see if there are any service references that match it, similar to what we have been manually doing with the previous commands. If something exists which satisfies the reference, you get a check mark. If not, you get a warning exclamation point and the text will be in red.

In other words, if there is at least one warning exclamation point (or one line in red) in the output of this script, none of the Spring managed dependencies for this bundle will be registered as OSGi components!

Note that it's not valid to paste the following directly into a shell (every line break is treated as a command, which will not work given that some closures span multiple lines).

math = ($.context bundle 0) loadclass "java.lang.Math"
stringutil = ($.context bundle 0) loadclass "com.liferay.portal.kernel.util.StringUtil"

id_map = [""=0]
each [($.context bundles)] { $id_map put ($it symbolicname) ($it bundleid) }

checkref = {
    pos = $args indexOf " "
    has_filter = $pos equals ($math abs $pos)
    service = if { $has_filter } { $args substring 0 $pos } { $it }
    filter = if { $has_filter } { ($args substring ($service length)) trim }

    refs = $.context servicereferences $service $filter
    color = if { $refs } { "\u001B[0;0m" } { "\u001B[1;31m" }
    satisfied = if { $refs } { "[✔]" } { "[!]" }
    echo "$color" "$satisfied" "$args" "\u001B[0;0m"
}

check = {
    bundle_id = if { (($args class) name) equals "java.lang.String" } { $id_map get $args } { $args }

    bundle = $.context bundle $bundle_id
    url = $bundle entry "OSGI-INF/context/context.dependencies"
    dependencies = []
    if { $url } { $stringutil readlines ($url openstream) $dependencies }

    echo ''
    echo "$bundle"
    echo ''
    each $dependencies $checkref
}

# List the bundle ids or symbolic names that you wish to check here

check "com.liferay.portal.workflow.kaleo.service"

# Blank line echo to suppress output

echo ''

Copy it into a text file, which I'll assume will be named check_context_dependencies.gosh and stored in /tmp. If you're diagnosing something other than com.liferay.portal.workflow.kaleo.service (or you're checking multiple services), update the check lines accordingly.

You can run this script by using the gosh command provided by Gogo shell. You will need to specify the file path using a file URI. In this example, the local file is located in /tmp/check_context_dependencies.gosh, which means we'd use the following command to execute it:

gosh file:///tmp/check_context_dependencies.gosh

We'll get the following as script output.

[✔] com.liferay.counter.kernel.service.CounterLocalService
[✔] com.liferay.portal.kernel.dao.orm.EntityCache
[✔] com.liferay.portal.kernel.dao.orm.FinderCache
[✔] com.liferay.portal.kernel.model.Release (&(
[✔] com.liferay.portal.kernel.scheduler.SchedulerEngineHelper
[!] com.liferay.portal.kernel.scheduler.TriggerFactory
[✔] com.liferay.portal.kernel.service.ClassNameLocalService
[✔] com.liferay.portal.kernel.service.ClassNameService
[✔] com.liferay.portal.kernel.service.PersistedModelLocalServiceRegistry
[✔] com.liferay.portal.kernel.service.ResourceLocalService
[✔] com.liferay.portal.kernel.service.RoleLocalService
[✔] com.liferay.portal.kernel.service.UserLocalService
[✔] com.liferay.portal.kernel.service.UserService
[✔] com.liferay.portal.kernel.service.persistence.ClassNamePersistence
[✔] com.liferay.portal.kernel.service.persistence.CompanyProvider
[✔] com.liferay.portal.kernel.service.persistence.RolePersistence
[✔] com.liferay.portal.kernel.service.persistence.UserPersistence
[✔] com.liferay.portal.workflow.kaleo.runtime.calendar.DueDateCalculator

Interestingly, it looks like for all of our Spring-managed dependencies to be added as OSGi components, we must first make sure that com.liferay.portal.kernel.scheduler.TriggerFactory is available. Since the package name includes the word "scheduler", we can conclude that TriggerFactory is unavailable because we disabled the scheduler, and so nothing exists in our environment to satisfy this dependency.

Smarter Debugging

While the approach above is what I did when I was troubleshooting the issue, I was later told that there is a built-in way to do this: a closer look at the source code for ModuleApplicationContextExtender shows that we're using Apache Felix's dependency manager to manage Liferay's Spring dependencies.

The documentation provides shortened variants on the command options (such as b) that are not available in the version of Apache Felix shipped with Liferay, but the longer command variant works, giving us this for troubleshooting our issue:

dm bundleIds "com.liferay.portal.workflow.kaleo.service"

If you aren't sure which bundle it is that is failing to work, then you can express your frustration with the acronym for a very popular expletive, and the command will report all services that are currently failing to resolve, even though some OSGi component is asking for them to be resolved.

dm wtf

The output of the former command, which takes bundleIds as a parameter, has to be scanned visually to figure out which component is failing, while the latter wtf command lists only the ones that are failing. In both cases, we see that com.liferay.portal.kernel.scheduler.TriggerFactory and many service builder services are missing, just as we discovered in our naive analysis.

Step 5: Choose Your Battle

If we track down why TriggerFactory is needed by com.liferay.portal.workflow.kaleo.service, we find that KaleoTimerInstanceTokenLocalServiceImpl wants to use it in order to add scheduled jobs related to workflow timers.

Knowing this, let's look back at the facts so far. As we were troubleshooting the issue, we made the following observation:

Looking at the code block that is throwing the exception, the call to _workflowEngineManager succeeds, but the call to _workflowDefinitionManager fails.

With this in mind, and with our discovery for TriggerFactory in mind, you might consider the following question: given that scheduler is disabled, is the bug that the call to _workflowEngineManager succeeds, or is the bug that the call to _workflowDefinitionManager fails?

You could argue it both ways, depending on whether you agree with the following statement: workflow timers should be treated as a required part of the Kaleo workflow implementation. That discussion would probably happen with the component team and the project owner team for the Kaleo workflow component. Whether you're an internal or an external contributor, you could start that conversation by assuming one stance, submitting your changes via a pull request to the component team, and then using the pull request for the conversation.

However, as a contributor, you might not be very deeply invested in the correct answer to this question, and so the thought of starting this conversation doesn't appeal to you. After all, you simply wish to fix a bug that affects other people. Understanding the nuances of how workflow should work sounds like an unnecessary use of your time.

What can you do instead? Well, you'd re-frame the question.

Instead of asking whether timers should work, you could answer the more specific question that applies to our current bug: if the only thing we're using _workflowEngineManager for is to determine whether or not workflow is available, and we know that it's giving us a misleading answer, and we can achieve what we want by changing the way we're pulling in the _workflowDefinitionManager reference, should we keep the logic that uses the misleading _workflowEngineManager manager reference?

This one has a more obvious answer: no.

Step 6: Test Your Changes, Part 1

In this case, we can update WorkflowDefinitionsDataProvider to simply wait for non-proxy implementations of WorkflowDefinitionManager, which we will assume simply provides the property proxy.bean=false.

@Reference(
    cardinality = ReferenceCardinality.OPTIONAL,
    policyOption = ReferencePolicyOption.GREEDY,
    target = "(proxy.bean=false)")
private WorkflowDefinitionManager _workflowDefinitionManager;

For ReferenceCardinality, because we don't need one to function (we're using it for a null check), and we only expect one to be provided, "The reference is optional and unary" is what best describes the cardinality for what we want, so we choose OPTIONAL.

For ReferencePolicyOption, because we don't expect it to be provided immediately, and we want to function properly should one be deployed, "When a new target service for a reference becomes available, ... bind the new target service" is what best describes the policy option for what we want, and so we choose GREEDY.
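To see why this combination behaves well when Kaleo arrives late, here is a plain-Java sketch of the binding behavior (no OSGi runtime involved; all names simplified and hypothetical):

```java
// Sketch of why OPTIONAL plus GREEDY works here: the component functions with
// a null reference, treating "no non-proxy WorkflowDefinitionManager bound"
// as "workflow unavailable". Plain Java stands in for the OSGi runtime.
public class OptionalReferenceDemo {

    interface WorkflowDefinitionManager { }

    static class WorkflowDefinitionsDataProvider {
        private WorkflowDefinitionManager _workflowDefinitionManager;

        // With a GREEDY policy option, the runtime calls this whenever a
        // matching service appears, even after the component is activated.
        void bind(WorkflowDefinitionManager manager) {
            _workflowDefinitionManager = manager;
        }

        boolean isWorkflowAvailable() {
            return _workflowDefinitionManager != null;  // the null check from the text
        }
    }

    public static void main(String[] args) {
        WorkflowDefinitionsDataProvider provider = new WorkflowDefinitionsDataProvider();

        System.out.println(provider.isWorkflowAvailable());  // false: scheduler off, no Kaleo
        provider.bind(new WorkflowDefinitionManager() { });   // a non-proxy impl shows up later
        System.out.println(provider.isWorkflowAvailable());  // true: greedily rebound
    }
}
```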

Now, how to deploy the change? Navigate to the folder containing our change, and call the gradlew command located at the root of the portal source.

cd modules/apps/forms-and-workflow/dynamic-data-mapping/dynamic-data-mapping-data-provider-instance
../../../../../gradlew deploy

After deploying the change, visit the Forms page. Looks like our issue has been resolved. Let's make sure that our component is still available.

each [($.context getServiceReferences "com.liferay.dynamic.data.mapping.data.provider.DDMDataProvider" null)] { ($.context service $it) class }

We still see our WorkflowDefinitionsDataProvider available, so everything checks out.

Step 7: Test Your Changes, Part 2

Up until this point you have everything you need to run the change locally. The last step is to make your change available to others, possibly submitting it as a bugfix. To do that, the next thing to confirm is whether you made the change in the correct repository.

In this example, we made the change to the liferay-portal repository. However, this isn't always the correct place to create a fix.

So, how do you know if something has moved to a subrepository? This was briefly touched on in the previous blog entry in this series.

What's in the master branch of liferay-portal is actually not the latest code. Liferay has started moving things into subrepositories, which you can see from the hundreds of strangely named repositories that have popped up under the Liferay GitHub account.

However, a lot of these repositories are just placeholders. These placeholders are in what's called "push" mode, where code from the liferay-portal repository is pushed to the subrepository. However, a handful of them (five at the time of this writing) are actually active: they're in what's called "pull" mode, where code is pulled from the subrepository into the liferay-portal repository on demand. You can tell the difference by looking at the .gitrepo file in each subrepository and checking the line describing the mode.

So in practice, find the file you're modifying, and then see if any parent folder has a .gitrepo file. If it does, check inside of that file for a line that says mode = pull. In this case, the .gitrepo located in dynamic-data-mapping, which is the parent folder for the dynamic-data-mapping-data-provider-instance module that we deployed, does say that our mode is pull.
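This folder-walking check is easy to script. Below is a hedged sketch of the same logic in Java, exercised against a throwaway folder tree rather than a real clone (paths and the class name are hypothetical):

```java
// Sketch: decide whether a module lives in a "pull" mode subrepository by
// walking up from the module folder and reading the nearest .gitrepo file.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Optional;

public class GitRepoModeCheck {

    static Optional<String> subrepoMode(Path start) throws IOException {
        for (Path dir = start; dir != null; dir = dir.getParent()) {
            Path gitrepo = dir.resolve(".gitrepo");
            if (Files.exists(gitrepo)) {
                for (String line : Files.readAllLines(gitrepo)) {
                    String trimmed = line.trim();
                    if (trimmed.startsWith("mode")) {
                        return Optional.of(
                            trimmed.substring(trimmed.indexOf('=') + 1).trim());
                    }
                }
                return Optional.empty();  // .gitrepo found, but no mode line
            }
        }
        return Optional.empty();  // no parent folder has a .gitrepo
    }

    public static void main(String[] args) throws IOException {
        // Build a throwaway folder tree mimicking the layout described above.
        Path root = Files.createTempDirectory("subrepo-demo");
        Path module = Files.createDirectories(root.resolve(
            "dynamic-data-mapping/dynamic-data-mapping-data-provider-instance"));
        Files.write(root.resolve("dynamic-data-mapping/.gitrepo"),
            java.util.Arrays.asList("[subrepo]", "mode = pull"));

        System.out.println(subrepoMode(module).orElse("none"));  // prints "pull"
    }
}
```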

Therefore, we'll need to clone the com-liferay-dynamic-data-mapping repository, update to the latest master branch of that repository, deploy the module to confirm that the issue still exists, and then apply our change to confirm that it fixes the issue.

Step 8: Submit Your Changes

After finding the right repository, the next step is to make sure you've followed the Contributors Guide located in the GitHub repository.

There is one vague step in the guide: finding the core maintainer who should review your changes so that they can actually be included in Liferay itself.

While Liferay has started a CODEOWNERS file, it's still only in the evaluation phase and so the file only has a single name in it. Given that, the easiest way is to leverage the pre-existing people who are dedicated to creating fixes and routing them to the correct people: Liferay's support department. A few people who work in Liferay's support department hang out in the Liferay Community Slack, so it should be easy to convince someone to take a look at your proposed changes and at least begin the conversation.

Once that conversation starts, the next step after that is a lot of patience while you wait for your changes to be included in the product. As the saying goes, you can lead a Liferay core maintainer to a pull request, but you can't make them merge.

Getting Started with Building Liferay from Source

Company Blogs May 18, 2017 By Minhchau Dang Staff

When a new intern onboards in the LAX office, their first step is to build Liferay from source. As a side-effect, the person handling the onboarding usually sees that the reported ant all time is in hours rather than minutes, and then starts asking people, "Is it normal for it to take this long to build Liferay for the first time?"

It happens frequently enough that sometimes my hyperactive imagination supposes that a UI intern's entire first day must be spent cloning the Liferay GitHub repository and building it from source, and then going home.

In hindsight, this must be what the experience is like for a developer in the community looking to contribute back to Liferay as well. Though, rather than go home in a literal sense, they might stare at the source code they've downloaded and built (assuming they got that far) and think, "Wow, if it took this long just to get started, it must be really terrible to try to do more than that," and choose to go home in a less literal way, but with more dramatic flair.

One of the many community-related discussions we have been having internally is how we can make things better, both internally and externally, when it comes to working with Liferay at the source code level. Questions like, "How do we make it easier to compile Liferay?" or "How do we make it easier to debug Liferay?" After all, just how open source are you if it's an uphill battle to compile from source in order to find out whether things are already fixed in the branch?

We don't have great answers to these problems yet, but I believe that we can, at a minimum, provide a little more transparency about what we are trying internally to make things better for ourselves. Sharing that information might give all of us a better path forward, because if nothing else, it lets us ask important questions about the pain points rather than questions about the color of the bikeshed.

Step 1: Upgrade Your Build System

Let's say I were to create a survey asking the question, "Which of the following numbers best describes your build time on master in minutes (please round up)?" and gave people a list of options, ranging from 5 minutes and going all the way up to two hours.

This question makes the unstated assumption that you are able to successfully build master from source, because none of the options is, "I can't build master from source." Granted, it may seem strange that I call that an "assumption", because why would you not be able to build an open source product from source?

Trick question.

If you've seen me at past Liferay North America Symposiums, and if you were really knowledgeable about Dell computers in the way that many people are knowledgeable about shoes or cars, you'd know that I've been sporting a Dell Latitude E6510 for a very long time.

It's a nice machine, sporting a mighty 8 GB of RAM. Since memory is one of the more common bottlenecks, this made it at least on-par with some of the machines I saw developers using when I visited Liferay clients as a consultant. However, to be completely honest, a machine with those specifications has no hope of building the current master branch of Liferay without intimate knowledge of Liferay build process internals. Whenever I attempted to build master from source without customizing the build process, my computer was guaranteed to spontaneously reboot itself in the middle.

So why was this not really a problem for other Liferay developers?

Liferay has a policy of asking its developers to accept upgrades to their hardware once every two to three years. The idea is that if new hardware increases your productivity, it's such a low cost investment that it's always worthwhile to make. A handful of people resist upgrades (inertia, emotional attachment to Home and End keys, etc.), but since almost everyone chooses to upgrade, Liferay has an ivory tower problem, where much of Liferay has no idea what it's like to even start up Liferay on an older machine, not to even discuss what it's like to compile Liferay on those older machines.

  • Liferay tries to do parallel builds, which consumes a lot of memory. To successfully build Liferay from source, a dedicated build system needs 8 GB of memory, while a developer machine with an IDE running needs at least 16 GB of memory.
  • Liferay writes an additional X GB every time you build, a lot of it being just copies of JARs and node_modules folders. While it will succeed on platter drives, if you care about build time, you'll want Liferay source code to live on a solid-state drive to handle the mass file creation.

So eventually, I ran into a separate problem that required a computer upgrade: I needed to run a virtual machine that itself wanted 4 GB, and that, combined with running Liferay alongside an IDE, meant my machine wasn't up to the task. After upgrading, the experience of building Liferay is substantially different from how it used to be. While I have other problems, like an oversensitive mousepad, building Liferay is no longer something that makes me wonder what else could possibly go wrong.

If you weren't planning on upgrading your computer in the near future, it doesn't make sense to upgrade your computer just to build Liferay. Instead, consider spinning up a virtual machine in a cloud computing environment that has the minimum requirements, such as something in your company's internal cloud infrastructure, or even a spot instance on Amazon EC2. Then you can use those servers to perform the build and you can download the result to your local computer.

Step 2: Clone Central Repository

So let's assume you've got a computer or virtual machine satisfying the requirements listed above. The next step is to get the source code so you can use this machine to build Liferay from source. This is the command you would use to do it:

git clone https://github.com/liferay/liferay-portal.git

However, the first step that interns get hung up on is waiting for this clone to complete. If you've ever tried to do that, you'll find that Liferay has violated one of the best practices of version control: we've committed a large folder full of binary files, .gradle. As a result of having this massive folder, GitHub sends us angry emails and, of course, cloning our repository takes hours.

How does Liferay make this better internally? Well, in the LAX office, the usual answer is to plug in the ethernet cable. Liferay invested heavily in fast internet, and so simply plugging in the ethernet cable makes the multi-hour process finish in 30 minutes.

However, it turns out that there is actually a better answer, even in the LAX office. Each office has a mirror that holds archives of various GitHub repositories, including liferay/liferay-portal. We suspect the original being mirrored is maintained by Quality Assurance, because we have heard that keeping all of our thousands of automated testing servers in sync used to result in angry emails from GitHub. Since it's an internal mirror, this means that downloading X GB and unzipping it takes a few minutes, even over WiFi, and it's on the order of seconds if you plug in your ethernet cable.

So, in order to improve our internal processes, we've been trying to get the people who manage our new hires and new interns to recognize that such a mirror exists and to use it during their onboarding process to save a lot of time for new hires on their first day.

So what does this mean for you?

Essentially, if you plan to clone the code directly onto your computer for simplicity, you'll need to do it at a time when you won't shut down the computer for a few hours and when you don't need it for anything else (maybe run it overnight), because it's a time-consuming process.

Alternately, have a remote server perform the clone, and then download an archive of the .git folder to your local computer, similar to what Liferay is trying to do internally. This will free up your machine to do useful things, and even spinning up Amazon EC2 spot instances (like an m1.small) and bringing things down with either SCP or an S3 bucket as an intermediate point may be beneficial.

Step 3: Clone the Binaries Cache

We mentioned the very large .gradle folder, but something else we noticed over time is that both master and 7.0.x share a lot of libraries, which were constantly being rewritten as you switched between branches. To make this situation slightly more tolerable, we've created a separate repository just for binaries. When building Liferay, you will also want a copy of this binaries cache. For convenience, make it a sibling of the portal source folder you cloned in the previous step.

git clone

Step 4: Build Central Repository

The next step is your first build from source. This is done with a single command that theoretically handles everything. However, before you run that single command, you may want to take a few steps to reduce the resources the build consumes.

  • Liferay issues a lot of parallel requests to the npm registry during builds. You can cap this by finding the commented-out nodejs.npm.args line in the build properties and adding it to your own build properties override.
  • Liferay includes a lot of extra things most people never need. You can remove these by finding the commented-out build.include.dirs value in the build properties and using it in your own override, or adjusting it if you want more than what it includes by default.
  • If you're on Windows, disable Windows Defender (or at least disable it on specific folders or drives). The ongoing scan drastically slows down Liferay builds.

After you've thought through all of the above, you're almost ready for the command itself. When Liferay introduced the liferay-binaries-cache-2017, it also introduced another way for the build to fail: Liferay's build process tries to auto-update this cache at build time, but since it's constantly synchronizing this folder, you might actually run into a situation where the cache cannot update because it already contains the files it wants to add! So you'll need to add extra commands to clean up the folder before you build.

cd liferay-binaries-cache-2017
git clean -xdf
git reset --hard
cd ../liferay-portal

At this point, you are now ready for the command itself, which requires that you download and install Apache Ant. Knowing that this is what you're downloading, you might also realize that the entry point for everything is build.xml.

ant all

So now you've built the latest Liferay source code, right?

Another trick question!

What's in the master branch of liferay-portal is actually not the latest code. Liferay has started moving things into subrepositories, which you can see from the hundreds of strangely named repositories that have popped up under the Liferay GitHub account.

However, a lot of these repositories are just placeholders. These placeholders are in what's called "push" mode, where code from the liferay-portal repository is pushed to the subrepository. However, a handful of them (five at the time of this writing) are actually active where they're in what's called "pull" mode, where code is pulled from the subrepository into the liferay-portal repository on-demand. You know the difference by looking at the .gitrepo file in each subrepository and checking the line describing the mode.

However, because all of those .gitrepo files are also present in the central repository, once you've cloned the liferay-portal repository you can use them to find out which subrepositories are active, with a little git, grep, and xargs magic run from the root of the repository.

git ls-files modules | grep -F .gitrepo | xargs grep -Fl 'mode = pull' | xargs grep -h 'remote = ' | cut -d'=' -f 2

I will dive into more detail on subrepositories in a later entry when we talk about submitting fixes. For now, they're not relevant to getting off the ground, beyond an awareness that they exist and that they add wrinkles to the fix submission process as a side effect.

Step 5: Choose an IDE

At this point, you've built Liferay, and the next thing you might want to do is point an IDE with a debugger to the artifact you've built, so that you can see what it's doing after you start it up. However, if you point an IDE to the Liferay source code in order to load the source files, whether it's Netbeans, Eclipse, or IntelliJ, you'll notice that while Liferay has a lot of default configuration files populated, these default files are missing about 90% of Liferay's source folders.

If you're using Netbeans: given that the people who have forked the Liferay Source Netbeans Project Builder overlap exactly with the team I know for sure uses Netbeans, this tool will help you hit the ground running. Since there are recent commits to the repository, I'm confident that the Netbeans users on that team actively maintain it, though I can't say with equal confidence how they'll react to the news that I'm telling other people about it.

If you're using Eclipse or Liferay IDE, then Jorge Diaz has you covered with his generate-modules-classpath script, which he has blogged about in the past; his blog post explains its capabilities much more clearly than I could in a mini-section at the end of this getting started guide.

If you're using IntelliJ IDEA Ultimate, you can take advantage of the liferay-intellij project and leave any suggestions. It was originally written as a streams tutorial for Java 7 developers rather than as a tool, and I still try to keep it as a streams tutorial even as I make improvements to it, but I'm open to any improvement ideas that make people's lives easier in interacting with Liferay core code.

Step 6: Bend Liferay to Your Will

So now that everything is set up for your first build, and you're able to at least attach a debugger to Liferay, the next thing is to explain what you can do with this newly discovered power.

However, that's going to require walking through a non-toy example for it to make sense, so I'll do that in my next post so that this one can stay as a "Getting Started" guide.

Pre-Compiling Liferay JSPs Take 2

Company Blogs January 9, 2013 By Minhchau Dang Staff

Many years ago, Liferay's build validation was only getting started and so JSP pre-compilation related slowness wasn't a big deal (I think it was really only used during the distribution process). However, I wanted to make it faster for a client-related project (where we couldn't really do a server warm-up after deployment) and the research culminated in a blog entry summarizing my findings.

Fast forward a few years later and I found out that Liferay QA was suffering from build slowness because Selenium tests have the same requirement of not being able to do a server warm-up before doing tests, and so pre-compilation is turned on. Thus came an improvement to the core build process in LPS-14215 based on the findings of that blog entry.

Fast forward a few more years and JSP pre-compilation is still a standard part of our distribution process. However, after an upgrade to Tomcat 7 for Liferay 6.1, we released milestones for community testing, and the community (as well as other core engineers) reported that everything was very slow the first time you loaded the portal. Poking around, we discovered that Tomcat was recompiling our pre-compiled JSPs when development mode was active.

In a nutshell, we found that any pre-compilation (including the example provided in the Tomcat 7 documentation) will provide zero performance benefit. This is because rather than assuming that class files newer than their JSP files were up-to-date, Tomcat 7 used the exact file timestamp to make that determination: when Jasper generated a class file, it set the class file's modified timestamp to match the JSP's, and Tomcat recompiled anything whose timestamps didn't match. This new behavior allowed older files to replace newer ones, in case you rolled back a change to a JSP and re-copied it to the application server, but it broke JSP pre-compilation.

Thus, we fixed our own tools with LPS-23032. The TimestampUpdater stand-alone program walks the entire directory and makes sure that the class files have the same timestamps as the java files, which (thanks to Jasper itself behaving properly) will have the same timestamps as the jsp files. All this together makes JSP pre-compilation effective again.
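Though the real TimestampUpdater ships inside portal-impl.jar, the core idea is easy to sketch in plain Java. In this sketch, the pairing rule (a .class file next to a .java file with the same base name) is a simplifying assumption for illustration, not Liferay's actual implementation:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.FileTime;
import java.util.stream.Stream;

public class TimestampSyncDemo {

	// Give every .class file under workDir the same modified timestamp as its
	// sibling .java file, mirroring the idea behind Liferay's TimestampUpdater.
	static void syncTimestamps(Path workDir) throws IOException {
		try (Stream<Path> files = Files.walk(workDir)) {
			files.filter(p -> p.toString().endsWith(".class")).forEach(classFile -> {
				Path javaFile = Paths.get(
					classFile.toString().replaceAll("\\.class$", ".java"));

				if (Files.exists(javaFile)) {
					try {
						Files.setLastModifiedTime(
							classFile, Files.getLastModifiedTime(javaFile));
					}
					catch (IOException ioe) {
						throw new UncheckedIOException(ioe);
					}
				}
			});
		}
	}

	public static void main(String[] args) throws IOException {
		Path workDir = Files.createTempDirectory("jsp-work");

		Path javaFile = Files.writeString(workDir.resolve("view_jsp.java"), "");
		Path classFile = Files.writeString(workDir.resolve("view_jsp.class"), "");

		// Simulate a class file whose timestamp no longer matches its source.
		Files.setLastModifiedTime(javaFile, FileTime.fromMillis(1_000_000L));
		Files.setLastModifiedTime(classFile, FileTime.fromMillis(2_000_000L));

		syncTimestamps(workDir);

		// The class file now carries the java file's timestamp.
		System.out.println(Files.getLastModifiedTime(classFile)
			.equals(Files.getLastModifiedTime(javaFile)));
	}
}
```

With Jasper itself keeping the .java timestamps equal to the .jsp timestamps, a pass like this is enough to stop Tomcat from recompiling pre-compiled JSPs.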

Which leads me to the point of this blog entry. Which is, the example script in the previous blog entry has been updated for Tomcat 7 and invokes Liferay's TimestampUpdater to fix file timestamps. This updated example is available in my document library (download here). See the README.txt contained in the example script for additional instructions on how to use it.

Note that this only works for Liferay 6.1 and beyond, because we didn't have to add the class in previous releases (our default bundle was Tomcat 6). If you need to deploy Liferay 6.0 or earlier against Tomcat 7 for whatever reason, you can extract the TimestampUpdater from the portal-impl.jar of Liferay 6.1 and put it in the portal-impl.jar for your version of Liferay.

Avoiding Accidental Automatic Upgrades

Company Blogs January 4, 2013 By Minhchau Dang Staff

Once upon a time, a long time Liferay user decided to upgrade from an old version of Liferay (Liferay 6.0 GA2) to a new version of Liferay (Liferay 6.1 GA2). They performed this upgrade by shutting down Liferay 6.0 running on an old VM, starting Liferay 6.1 on a new VM, and then doing some DNS magic and some load balancer magic to switch to using the new VM after the upgrade completed.

A day later, a routine Cron job restarted Liferay 6.0 on the old VM, which still pointed at the same database used by Liferay 6.1.

Liferay 6.0's upgrade code checked to see if it had to run any of its upgrade processes, found that the build number stored in the Release_ table (6101) was higher than any of the upgrade processes it wanted to run, and proceeded to update the Release_ table with its own version number (6006).

Naturally, because nobody was accessing that old version of the portal, it didn't matter. It didn't touch any other tables, so Liferay 6.1 continued to run fine. That is, until Liferay 6.1 had to be restarted a week later for some routine maintenance.

As Liferay restarted, Liferay 6.1's upgrade code checked to see if it had to run any of its upgrade processes, found that the build number stored in the Release_ table (6006) was lower than some of the upgrade processes it wanted to run, and so ran them it did. Of course, because the data was actually in the 6101 state, all of these upgrade processes wound up corrupting the data, rendering the whole Liferay instance unusable.

No problem, the client had followed industry best practices and regularly backed up their system. So, they restored the database from yesterday's backup, thinking that all would now be well.

Alas, it was not meant to be. The damage had been done a week earlier, so the bad data was already in the database. Every time the client restored the database from yesterday's backup, Liferay re-detected that the version number was different and that it needed to re-upgrade from 6006 to 6101. As a result, Liferay obediently re-corrupted the database.

This raises the following question: how do we avoid the data corruption resulting from an older release running against a newer release's database?

Enter LPS-21250, which prevents Liferay from starting up if its release number is lower than what is seen in the Release_ table. This means that if this situation were to repeat with 6.1 GA2 and 6.2 M3, 6.1 GA2 would fail to start up and not update the Release_ table.
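The guard itself boils down to a version comparison at startup. Here is a minimal, self-contained sketch of the idea; the method name and structure are illustrative, not Liferay's actual code:

```java
public class ReleaseGuardDemo {

	// Simplified sketch of the LPS-21250 idea: refuse to start when the build
	// number stored in the Release_ table is newer than the running release.
	static void checkRelease(int storedBuildNumber, int runningBuildNumber) {
		if (storedBuildNumber > runningBuildNumber) {
			throw new IllegalStateException(
				"Database build " + storedBuildNumber +
					" is newer than this release (" + runningBuildNumber + ")");
		}
	}

	public static void main(String[] args) {
		// Older data, newer release: startup proceeds (and upgrades may run).
		checkRelease(6006, 6101);

		// Newer data, older release: startup is blocked before any damage is done.
		try {
			checkRelease(6101, 6006);
		}
		catch (IllegalStateException ise) {
			System.out.println("Startup blocked: " + ise.getMessage());
		}
	}
}
```

The key design point is that the check happens before any upgrade process touches the data, so an old release pointed at a new database fails fast instead of quietly rewriting the Release_ table.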

Of course, this doesn't fix the problem of earlier versions of Liferay (such as Liferay 6.0) updating the Release_ table, raising a second question: how do we handle the problem where earlier versions of Liferay (such as Liferay 6.0) might update the Release_ table?

In Liferay 6.0 and in Liferay 6.1 GA1, it is possible to prevent upgrade processes from running by setting the upgrade.processes property to blank. The Release_ table will still be updated, but at least Liferay wouldn't attempt to re-corrupt its own data. This strategy will work to prevent upgrade processes from running if you were to accidentally start a Liferay 5.x release against a Liferay 6.0 database or a Liferay 6.1 GA1 database.


In Liferay 6.1 GA2, a blank value for upgrade.processes has been given a new meaning while adding support for LPS-25637: automatically try to detect the upgrade processes that need to be run. So, this workaround no longer works.

So at this time, for Liferay 6.1 GA2 and possibly the Liferay 6.2 releases, you have two ways of handling the problem of a Liferay 5.x or a Liferay 6.0 running against your database. You can either create a no-op upgrade process, or you can prevent Liferay from starting up at all if it detects a version number change.

In the former case, there's no native UpgradeProcess in Liferay 6.1 which truly does nothing. So, you can either use an upgrade process that is very unlikely to run (such as the 4.4.2 to 5.0.0 upgrade process), or you'll have to write an EXT plugin which contains a custom do-nothing upgrade process and specify it in your portal properties, or you'll have to specify a bad value for upgrade.processes and get a stack trace on every startup.


In the latter case, LPS-25637 also introduced a new feature which skips all upgrade processes if there's a version number match (previous versions of Liferay would check each one individually). Leveraging that, you can make use of a built-in run-on-all-releases upgrade process that was originally intended to allow you to split an upgrade in chunks, and effectively does nothing more than shut down Liferay.


Updating Document Library Hooks

Company Blogs August 28, 2009 By Minhchau Dang Staff

A common problem that comes up is the need to change how documents are stored in the Document Library.

For example, you may start out storing documents using Jackrabbit hooked to a database. However, as time goes on and you find yourself using more Liferay deployments, the number of database connections reserved for Jackrabbit alone gets dangerously close to the maximum number of database connections supported by your database vendor.

So, you want to switch from using JCRHook over to using the FileSystemHook to store documents on a SAN.

If your migration uses two different hooks and you have access to the portal class loader (for example, you're running in the EXT environment, or you have the ability to update the context class loader for your web application like in Tomcat 5.5), the solution is straightforward.

// sourceHook and targetHook are instances of the source and target hooks;
// the method names follow the old com.liferay.documentlibrary Hook interface.
InputStream exportStream = sourceHook.getFileAsStream(
	companyId, folderId, fileName, versionNumber);

targetHook.updateFile(
	companyId, portletId, groupId, folderId, fileName,
	versionNumber, fileName, fileEntryId, properties, modifiedDate,
	tagsCategories, tagsEntries, exportStream);

In summary, you instantiate the hook objects, pull data from your source hook, and push it to your target hook. Then, update your configuration and restart your server to use the new document library hook. (In case you're not sure how to change document library hooks, see the comments for the dl.hook.impl property in portal.properties.)

If you'd like to see how this is accomplished via code samples rather than via textual explanations, these document links might help (just to note, the sample plugins SDK portlets leverage the portal class loader in order to access the classes found in portal-impl.jar, and so you must use Tomcat 5.5 or another servlet container that supports a similar mechanism in order to use them):

However, it's not always so easy. Another common problem is where you start out using Jackrabbit hooked up to a file system (perhaps in the earlier iterations of Liferay where JCRHook was the default), but you want to move to a clustered environment and you do not have access to a SAN.

Therefore, you want to migrate over to using Jackrabbit hooked to a database.

This is different from the previous problem in that you're using the exact same hook in both cases, and the way Liferay handles Jackrabbit inside this hook means you can only have one active Jackrabbit workspace configuration (specified in your configuration files), so simply instantiating two different hooks is not possible.

The solution here is to run the migration twice.

First, with the original JCRHook configuration, export the data to an intermediate hook (for example, the Liferay FileSystemHook) and shut down your Liferay instance. Then, update your portal properties and/or repository.xml to reflect the desired JCRHook configuration, and import from your intermediate hook back into JCRHook.

If you do not have access to the portal class loader, the migration is less straightforward, because you won't have access to the hook classes themselves.

// The method names follow the old document library service API;
// intermediateFileLocation must encode enough to recover the arguments below.
InputStream exportStream = DLLocalServiceUtil.getFileAsStream(
	companyId, folderId, fileName, versionNumber);

FileUtil.write(intermediateFileLocation, exportStream);

InputStream importStream =
	new FileInputStream(intermediateFileLocation);

DLLocalServiceUtil.updateFile(
	companyId, portletId, groupId, folderId, fileName,
	versionNumber, fileName, fileEntryId, properties, modifiedDate,
	tagsCategories, tagsEntries, importStream);

In summary, you'll have to read the documents using DLLocalServiceUtil with the original configuration and save them to disk in a way where you can parse all the data needed to call updateFile (perhaps in the same way that mirrors the way FileSystemHook works). Then, you re-import those exported documents using DLLocalServiceUtil after updating your configuration and restarting the server.

Using the Dynamic Query API

Company Blogs August 17, 2009 By Minhchau Dang Staff

Assuming that indexes have already been created against the fields you're querying against, the Dynamic Query API is a great way to create custom queries against Liferay objects without having to create custom finders and services.

However, there are some "gotchas" that I've bumped into when using them, and I'm hoping that sharing these experiences will help someone out there if they're banging their head against a wall.

One gotcha was getting empty results, even though the equivalent SQL query was definitely returning something.

By default, the dynamic query API uses the current thread's class loader rather than the portal class loader. Because the Impl classes can only be found in Liferay's class loader, when you try to utilize the Dynamic Query API for plugins that are located in a different class loader, Hibernate silently returns nothing instead.

A solution is to pass in the portal class loader when you're initializing your dynamic query object, and Hibernate will know to use the portal class loader when looking for classes.

DynamicQuery query = DynamicQueryFactoryUtil.forClass(
	UserGroupRole.class, PortalClassLoaderUtil.getClassLoader());

A second gotcha was figuring out how to use SQL projections (more commonly known as the stuff you put in the SELECT clause), the most common cases being to select a specific column or to get a row count.

The gotcha was that the sample documentation on the Liferay Wiki uses Hibernate's DetachedCriteria classes, and trying to add Projections.rowCount() to a DynamicQuery object gave me a compile error, because the Liferay DynamicQuery object requires using Liferay's version of the Projection class, rather than the Hibernate version.

To resolve this gotcha, you can use ProjectionFactoryUtil to get the appropriate object:

query.setProjection(ProjectionFactoryUtil.rowCount());

A third gotcha was a "could not resolve property" error when I tried to add a restriction via RestrictionsFactoryUtil. Even though it looked like the bean class definitely had that attribute defined in portal-hbm.xml, Hibernate wasn't able to figure out what property needed to be used.

The gotcha is that some of Liferay's objects use composite keys. When using composite keys with Hibernate's detached criteria and Liferay's dynamic queries, the name of the property must include the name of the composite key. In the case of the Hibernate definitions created by Liferay's Service Builder, composite keys are always named primaryKey.

Therefore, the solution is to use primaryKey.userId instead of userId.

Criterion criterion =
	RestrictionsFactoryUtil.eq("primaryKey.userId", userId);

A fourth gotcha is that even if you don't specify a projection (resulting in the default of selecting all columns, an implied "give me the entity"), casting directly to a List<T> won't work as it does in custom finders, because you're getting back a List<Object>, not a List<T>.

The quick and dirty solution is to either (a) copy into a List<T> via addAll and a typecast (simulating what happens in a custom finder), or (b) add each result to a List<T> individually (cleaner to read).
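Here is a self-contained sketch of both approaches, using strings to stand in for the entity objects; the helper names are made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class DynamicQueryCastDemo {

	// (a) Quick and dirty: copy into a typed list through an unchecked cast,
	// simulating what a custom finder does internally.
	@SuppressWarnings("unchecked")
	static <T> List<T> copyWithCast(List<Object> results) {
		List<T> list = new ArrayList<>();

		list.addAll((List<T>)results);

		return list;
	}

	// (b) Cleaner to read: cast each element as you add it.
	static <T> List<T> copyEach(List<Object> results, Class<T> clazz) {
		List<T> list = new ArrayList<>();

		for (Object result : results) {
			list.add(clazz.cast(result));
		}

		return list;
	}

	public static void main(String[] args) {
		// Pretend this came back from dynamicQuery(): it is typed List<Object>,
		// even though every element happens to be a String.
		List<Object> rawResults = new ArrayList<>();

		rawResults.add("alpha");
		rawResults.add("beta");

		List<String> quick = copyWithCast(rawResults);
		List<String> clean = copyEach(rawResults, String.class);

		System.out.println(quick);  // [alpha, beta]
		System.out.println(clean);  // [alpha, beta]
	}
}
```

Option (b) also fails fast with a ClassCastException at copy time if a row isn't the type you expected, rather than blowing up later at the point of use.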

Adding a Plugins Portlet to the Control Panel

Company Blogs December 30, 2008 By Minhchau Dang Staff

A "Gotcha!" situation came up today when I attempted to add a portlet developed in the plugins environment to the Control Panel (which is a new feature being developed for 5.2.0 that you can read about in Jorge's blog entry).

In a nutshell, the Control Panel provides a centralized administrative interface for the entire portal. Unlike a standard layout where you 'manage pages' and put portlets anywhere, whether or not a portlet shows up inside of the Control Panel depends on whether or not you've set the following nodes in your liferay-portlet.xml:

  • control-panel-entry-category: The 'category' where your portlet will appear. There are currently 4 valid values for this element: 'my', 'content', 'portal', and 'server'.
  • control-panel-entry-weight: Determines the relative ordering for your portlet within a given category. The higher the number, the lower in the list your portlet will appear within that category.
  • control-panel-entry-class: The name of a class that implements the ControlPanelEntry interface which determines who can see the portlet in the control panel via an isVisible method.

By making sure to define these elements in your liferay-portlet.xml, you can theoretically add any portlet (whether they be portlets built in the Extensions environment or Plugins environment) to the new Control Panel interface.
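As a sketch, a liferay-portlet.xml entry using these three elements might look like the following; the portlet name, weight, and entry class are hypothetical:

```xml
<portlet>
	<portlet-name>my-admin-portlet</portlet-name>
	<control-panel-entry-category>portal</control-panel-entry-category>
	<control-panel-entry-weight>99.0</control-panel-entry-weight>
	<control-panel-entry-class>com.example.MyControlPanelEntry</control-panel-entry-class>
</portlet>
```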

In my case, I couldn't think of a page/layout which would fit for the plugin I was developing, but since it had administrative elements to it, I felt the best place for it would be the Control Panel. So, I updated the DTD for liferay-portlet.xml to 5.2.0 and added the appropriate control panel elements.

Unexpectedly, the new plugin portlet did not show up in the Control Panel.

After debugging com.liferay.portal.util.PortalImpl, I verified that the portlet was recognized as a Control Panel portlet and that it was added to the java.util.Set, but by the time the set was returned from the method, the portlet had disappeared. I was at a loss as to why.

However, it turns out those abstract algebra classes that I took in college came in handy, and I realized where the "Gotcha!" was hidden:

Set portletsSet = new TreeSet(
	new PortletControlPanelWeightComparator());

By their mathematical definition, every element in a set must be unique. After checking com.liferay.portal.util.comparator.PortletControlPanelWeightComparator, I discovered that the control panel render weight is the only thing that matters for determining uniqueness of a portlet within a Control Panel category. As long as render weights are equal, they are treated as the same portlet.

In summary, my portlet shared the same render weight as another Control Panel portlet (I gave it the same render weight as My Account because I copy-pasted), so the java.util.TreeSet treated it as a duplicate of the My Account portlet. That is why it was clearly being added, yet had disappeared by the time the set was returned from the method.
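The underlying java.util.TreeSet behavior is easy to reproduce in isolation. In this sketch, the Portlet record and the weights are hypothetical, but the first comparator mirrors the weight-only comparison described above, and the second shows the tie-breaking fix:

```java
import java.util.Comparator;
import java.util.TreeSet;

public class ControlPanelSetDemo {

	// Hypothetical stand-in for a portlet: just an id and a control panel weight.
	public record Portlet(String id, double weight) {
	}

	public static void main(String[] args) {
		// Weight-only comparison, like PortletControlPanelWeightComparator:
		// two portlets with equal weights compare as equal.
		TreeSet<Portlet> weightOnly = new TreeSet<>(
			Comparator.<Portlet>comparingDouble(Portlet::weight));

		weightOnly.add(new Portlet("my-account", 1.0));

		// Same weight: the TreeSet treats this as a duplicate and drops it.
		boolean added = weightOnly.add(new Portlet("my-custom-portlet", 1.0));

		System.out.println(added);              // false
		System.out.println(weightOnly.size());  // 1

		// Breaking ties by portlet id keeps both entries.
		TreeSet<Portlet> tieBreaking = new TreeSet<>(
			Comparator.<Portlet>comparingDouble(
				Portlet::weight).thenComparing(Portlet::id));

		tieBreaking.add(new Portlet("my-account", 1.0));
		tieBreaking.add(new Portlet("my-custom-portlet", 1.0));

		System.out.println(tieBreaking.size());  // 2
	}
}
```

Note that TreeSet.add silently returns false and keeps the existing element when the comparator reports a tie, which is exactly why the portlet vanished without any error.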

So, in the event that you are planning to leverage the new Control Panel feature for portlets that you're developing, bear in mind that you need to keep your render weights different from the portlets which exist in Liferay and from other portlets you may want to add to your Liferay instance, and you'll be okay.

Alternatively, you could extend com.liferay.portal.util.PortalImpl, override the definition of getControlPanelPortlets, and use a different comparator which checks portlet ids as well in the event of ties.

Running Ant With a Double-Click

Company Blogs December 19, 2008 By Minhchau Dang Staff

There are times when I don't want to deal with the start up time of an integrated development environment and opt to use a fast-loading text editor instead. By making this trade-off, I wind up losing many of the nifty productivity features that are built into IDEs.

One feature that can be taken for granted is the ability to run build targets with a double-click. By leaving the comfort of an IDE, I need to open up a command window (using the CmdHere PowerToy to skip a navigation step), call ant {target-name}, wait for the task to finish, then call exit once it completes successfully.

However, I realized that I don't have to lose that. I usually run default targets (so I was waiting for a command window to load just to type three letters: ant). Since I usually don't edit build.xml files, I could safely configure Windows Explorer to call ant in the appropriate directory whenever I double-click it:

  1. Navigate to the ANT_HOME directory and create a file called ant-noargs.bat. Inside of that file, add the following contents:

    @if %~nx1 neq build.xml start "" "C:\Program Files\Textpad 5\TextPad.exe" "%~f1"
    @if %~nx1 equ build.xml call ant
    @if errorlevel 1 pause

    The filename check is so this only applies to 'build.xml' files and so I load my default text editor for all other files. The additional error level check exists so I don't have to look at the command window at all to see if anything went wrong -- it will automatically close when it's complete and will only stay open if something went wrong.

  2. Navigate to a directory containing a build.xml file, right-click on the build.xml file, and select Open With... from the context menu. If extra options show up, select the Choose Program... option.

  3. Use the Browse... button and navigate back to the ant-noargs.bat created earlier. Confirm the selection. When returning to the dialog, hit the checkbox which reads Always use the selected program to open this kind of file and confirm by hitting Ok.

As a result of the above, deploying the extensions environment, hooks, themes, and portlets are still a double-click away (or an Enter key away if I'm currently using a keyboard to navigate in Windows Explorer) without ever loading an IDE.

Pre-Compiling Liferay JSPs

Company Blogs April 21, 2008 By Minhchau Dang Staff

A feature I recently discovered was the ability to precompile all the JSPs in Liferay. The motivation behind this discovery was the need for sanity checking after making a small change to nearly a hundred JSPs across many different portlets.

After using it on this particular problem, I wanted to know if it was possible to apply pre-compilation to a current ongoing project.

In this project, the development cycle involves testing changes in a development environment, generating a clean build from source and deploying it to an existing Tomcat instance, promoting those changes to an alpha environment, validating them again there, promoting them to a beta environment, and validating them one more time. This process must be completed in a two-hour window, so I wanted to use pre-compilation to reduce the time spent waiting for page loads.

So, the first question is, how do you setup Liferay for pre-compiling JSPs?

The built-in approach is to set the jsp.precompile property to on in your portal properties. In this scenario, Liferay's JSPCompiler will precompile each file individually, avoiding potential out-of-memory errors when compiling large numbers of JSPs. It also has the nice property of aborting as soon as the first compilation error is encountered, making it very friendly for sanity checks. However, in a clean environment, it takes ten minutes to run to completion, making it practical only for unattended builds and sanity checks.

Therefore, a faster option was necessary. After digging into the documentation for Jasper, I discovered that faster options are available if you modify the build scripts being used in your development environment. The sequence of changes that were made in my local development environment are discussed below.

One option becomes available if you set your ANT_OPTS environment variable to give Ant a lot of memory. Once you do this, you can modify the appropriate build file (build.xml in portal-web or build-parent.xml in ext-web) to take advantage of the extra memory by calling the javac ant task directly in the compile-common-jsp task. In a clean environment, this shortens the pre-compilation time to three minutes, making this solution attractive for general deployment.

But I was pretty sure I could do better, since over a minute was being spent decompressing JARs. By modifying the build target specific to an application server (for example, compile-tomcat), you remove the need to decompress JARs prior to precompilation, as you know exactly where the JARs are found and can setup a simple classpath accordingly. In the particular example where your application server is Tomcat, this step reduces the JSP pre-compilation time in a clean environment to one minute, making this solution attractive for Liferay core and extensions development.

In the process of creating an application-specific build target for the Liferay WAR, it's possible to take this one step further and create a full build script which compiles every plugin WAR. However, since the plugin build scripts involve hot deployment rather than direct deployment to the application server, this is a bit tricky, as the work and webapps folders may go out of synch in terms of timestamps, thus negating the performance benefit of pre-compiling your JSPs if this lack of synchronization results in a recompilation.

Therefore, in the example build script (which makes use of the Ant-Contrib library to iterate through all the plugins folders, and is available for download here), it is assumed that Liferay and the various Liferay/JSR-168 plugins have already been deployed to Tomcat's webapps folder, and the build script and related files are inside of Tomcat's webapps folder. Running ant clean will wipe the work directory, and ant all will compile the ROOT context, followed by all the different plugins.

So, I now have a script which runs in less than two minutes which generates all the appropriate Java code and class files for the JSPs in Liferay along with every plugin in the development environment, speeding up the validation process.

Since these modifications depend on Liferay being deployed to the ROOT context, it can't really be committed to core without some modifications. Still, hopefully this knowledge is also useful to someone else.

Liferay Code Format

Company Blogs December 26, 2007 By Minhchau Dang Staff

After reading through Jorge's blog post on the guidelines for Liferay contributions, and after following the link to the style guidelines on the Liferay wiki, I put together a configuration file which works with Eclipse's built-in code formatter to adjust the whitespace in your code so that it is compatible with the stated Liferay guidelines.

To use this configuration file, go to your Eclipse preferences dialog. Check under Java > Code Style > Formatter (screenshot), and then Import the above linked file. A new profile called "Liferay [user]" will then be made available (screenshot). Finally, click the Apply button to set it for the current workspace.

Once this configuration file is imported and applied to the current workspace, every time you run Source > Format, Eclipse will automatically adjust the whitespace in your code to make it conform to the stated Liferay guidelines. To bulk reformat, you can right-click on any given folder while in the Navigator view (or Ctrl+Click on the folder under OS X), and select Source > Format.

I don't know of any tools which help satisfy all of the Liferay style guidelines not pertaining to whitespace. However, based on its documentation, the commercial version of Jalopy from TRIEMAX software appears to come very close with its sorting capabilities.

Eclipse Open Resource Dialog

Company Blogs November 20, 2007 By Minhchau Dang Staff

In Eclipse, everything is a plugin, including the IDE itself. So, if there's something which bothers you, all you need to do is replace the plugin with something which works in a way more consistent with your personal preferences, or if the plugin is open source, tweak the existing plugin to suit your needs.

In my case, what bothered me was this: I don't have any reason to look at .svn-base files or .class files, so I'd prefer they not show up. Playing with working sets doesn't solve the problem, because (as far as I know) there's no way to exclude .svn folders from the working set without adding all the files individually. I tried to add extension points, but I could never get the filters to show up. So, all other options exhausted, I went looking for a way to modify the source code.

In order to modify the "Open Resource" dialog, I needed to find the FilteredResourceSelectionDialog class in the org.eclipse.ui.dialogs package in the org.eclipse.ui.ide Java archive. A bandwidth-intensive way to get the source code for that one file was to download Eclipse Classic and find it under org.eclipse.ui.ide in the org.eclipse.platform.source folder.

I then overrode the matches method in FilteredResourceSelectionDialog.ResourceFilter to exclude all file names ending in .svn-base and .class, created a quick build file to include all the Eclipse plugins jars in my classpath, compiled, copied the resulting class files into the appropriate Java archive, and got this: a clean view showing only the files I might want to open.

Portlet Ids Bookmarklet

Company Blogs November 2, 2007 By Minhchau Dang Staff

The nice thing about Liferay's drag and drop layout is that you (usually) don't have to remember any of the portlet ids. At least, until you do.

If you skim through Liferay's portal properties file, there's a set of properties all prefixed with default.user.layout which let you control this. However, those properties want portlet ids, which Liferay doesn't readily give to you unless you dig through /portal/portal-web, /ext/ext-web, and /plugins/portlets.
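For a sense of what those properties look like (the exact key names vary across Liferay versions, and the portlet ids below are placeholders, so check the portal.properties bundled with your release before copying any of this), a default user layout might be declared along these lines:

```properties
# Illustrative only: verify these keys against your version's portal.properties.
default.user.layout.name=Home
# Comma-delimited portlet ids per column; the ids below are placeholders.
default.user.layout.column-1=3,23
default.user.layout.column-2=11_INSTANCE_abcd,29
```

This is exactly where having a quick name-to-id reference pays off: each column is just a comma-delimited list of portlet ids.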

So, I wanted to create something which converted a layout (like this) into a quick reference for all the different portlet ids (like this), without digging through the backend and/or mousing over the configuration links.

So, while I'm not familiar enough with Liferay to implement a server-side way to do this, I dug through "view page source" and found that it was pretty straightforward to write a bookmarklet which, when run on a Liferay-based page, gives me the name-to-id mapping for all the portlets currently on the page in a Liferay.Popup.

To use it, convert the script below into a bookmarklet (you can use this one, which turns up in a Google search) and paste the bookmarklet into your address bar. Voila! Instant popup containing the portlet ids.
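The original script isn't reproduced here, but the general shape of such a bookmarklet looks something like the sketch below. It assumes portlet wrapper elements carry ids of the form p_p_id_&lt;portletId&gt;_ (an assumption about Liferay's markup; adjust the pattern for your version), and the extractPortletIds and showPortletIds helper names are mine, not Liferay's:

```javascript
// Hypothetical sketch of the bookmarklet's core logic. The id pattern
// p_p_id_<portletId>_ is an assumption about Liferay's page markup.
function extractPortletIds(elementIds) {
  var mapping = [];
  for (var i = 0; i < elementIds.length; i++) {
    var match = /^p_p_id_(.*)_$/.exec(elementIds[i]);
    if (match) {
      mapping.push(match[1]);
    }
  }
  return mapping;
}

// In the bookmarklet itself, collect the ids from the live page and
// display them (a plain alert here instead of Liferay.Popup, for brevity):
function showPortletIds() {
  var ids = [];
  var divs = document.getElementsByTagName('div');
  for (var i = 0; i < divs.length; i++) {
    ids.push(divs[i].id);
  }
  alert(extractPortletIds(ids).join('\n'));
}
```

Wrapped in a javascript: URL, showPortletIds() becomes the bookmarklet body; the real script would also map each id back to the portlet's display name before popping it up.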
