
Updating and deploying your code made easy!

General Blogs January 26, 2015 By Marcus Hjortzén

Are you developing a large number of plugins?
Have you ever been frustrated with manually deploying war files to development, test and production (clustered?) environments? When performing quality assurance on a release, it is important that the same build/release is eventually deployed to production. This might sound like an easy task, but there are many traps out there if you're not careful.
This blog post is about how we improved our release system with the help of Maven and a custom portlet. Performing releases is now just a click of a button!

Our system

Jenkins - builds our source code, polling Subversion regularly. Previously it also contained jobs to deploy war files into various systems.
Artifactory - a Maven repository where artefacts built by Jenkins are stored.
Liferay - different environments running Liferay EE: development, test, production (live) and production (backup).
Update portlet - a portlet written by us that enables us to update to specific release versions.


In our environments we used a job in Jenkins that retrieved the war file (from the appropriate source, more on that later) and transferred it to a specified environment. The job was configurable and you had to choose:
  • Which plugin (one of roughly 30 available)
  • Which environment to deploy to (test, production live, production backup, development etc.)
That meant that if we wanted to deploy 5 plugins in a release we had to run the job 5 times for each server; in our case, with test and two production machines, that meant 15 times. With 10 plugins that's 30 clicks, and 10 plugins isn't uncommon for us. It may not sound like a big problem, but the crucial part is that you have to deploy exactly the same version to each machine; anything else may prove disastrous.
We did that for a couple of years.


At first we handled all build artefacts within Jenkins, and the deploy job mentioned above always picked up the latest artefacts from the release build job. That had a couple of drawbacks; for instance, it was very difficult to work in feature branches or to make critical patches based on earlier releases.
The first step towards a solution was to start using Maven. With Maven we were always working on a major version, while other minor versions could be in progress in parallel branches. For instance, if a critical patch had to be made, a change to the release branch and a bump in the minor version solved it, without affecting the current major version being developed and evaluated in the main trunk.
Just to clarify: by using Maven, artefacts were published to a central (internal) repository. Each published release version is stored there, forever, and cannot be changed.
What did we gain by this?
Earlier we didn't really know which artefact was being released to a system, since the version was always the latest artefact built by the Jenkins release job. If a developer triggered the release job in Jenkins, a new version would be generated, not necessarily the same one as was previously installed in test.
Now we have full control over the versions being used. There were also other benefits from moving to Maven (we won't go back, ever), for instance a choice of many IDEs (since most support Maven fully).
But we were still stuck using the Jenkins job to deploy each artefact to each system; the only change was that the job now downloaded the artefact from our repository with a specific version in mind.

Sunshine, beaches and happy days

(Or. Enter UpdatePortlet.)
So. After much thinking (ahem), pondering, donuts and coffee we had an idea. What if ... there was a way ... to see if there was an update available?
(Yes, the new cloud services from Liferay have stolen our idea regarding patches.)
Stretching it even further ... what if we could actually update to that version, with just the click of a button?
Said and done.
A portlet was created that lists all the currently installed plugins, reads the version information from each plugin and displays the related information in a pretty UI (DataTables really rocks).
Using the portlet information (such as artefact id and group), we access our Maven repository and pull the latest version information, comparing it to the version installed locally in Liferay.
If a version differs from the latest available, we also display a button for updating the plugin and mark the row in a distinct colour to make it easier for the administrator to spot stale plugins.
Since our Maven repository also provides metadata regarding SNAPSHOT versions (that is, versions that have not yet been marked as release versions), we can also detect whether a newer bleeding-edge version is available.
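For reference, the version information we read comes from the repository's maven-metadata.xml for each artefact. A sketch of what such a file might look like (the group id, artefact id and versions are made up for illustration):

```xml
<metadata>
  <groupId>com.example.portal</groupId>
  <artifactId>example-portlet</artifactId>
  <versioning>
    <!-- latest may be a SNAPSHOT; release is the newest non-SNAPSHOT -->
    <latest>15.06-SNAPSHOT</latest>
    <release>15.05</release>
    <versions>
      <version>15.04</version>
      <version>15.05</version>
      <version>15.06-SNAPSHOT</version>
    </versions>
    <lastUpdated>20150126120000</lastUpdated>
  </versioning>
</metadata>
```

Comparing the release element against the locally installed version tells us whether an update button should be shown, while the SNAPSHOT entries reveal bleeding-edge builds.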

Deploying a bundle

We're still stuck with clicking 15 times. Or 30. The more obvious risk now is that we might click 6 times instead of 5 if a plugin has a newer release version but isn't meant to be deployed at this point in time. Administrators are of course trigger-happy when they see that updates exist, even if those updates shouldn't be applied quite yet.
The need to deploy a fixed set of plugins with specific versions became apparent, and thanks to the excellent architecture of Maven the solution was pretty neat.
By creating a Maven artefact without any real source, just a pom.xml file, we could list dependencies on the other plugins (and their specific versions). The pom (which we named the deploy-pom in-house) can then be processed by the Update Portlet, which installs each plugin (even in the specified order) with just one click from the administrator. The deploy-pom also has its own version, which comes in handy: we use it to describe the sprint we're deploying (we're using 3-week sprints, currently named 15.05, so the deploy-pom's version is 15.05).
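A minimal sketch of what such a deploy-pom might look like (group and artefact ids are made up; the real pom would list every plugin in the sprint):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example.portal</groupId>
  <artifactId>deploy-pom</artifactId>
  <version>15.05</version>   <!-- the sprint being deployed -->
  <packaging>pom</packaging>
  <dependencies>
    <!-- the exact plugin versions that make up this release -->
    <dependency>
      <groupId>com.example.portal</groupId>
      <artifactId>example-portlet</artifactId>
      <version>3.2.1</version>
      <type>war</type>
    </dependency>
    <dependency>
      <groupId>com.example.portal</groupId>
      <artifactId>example-theme</artifactId>
      <version>1.4.0</version>
      <type>war</type>
    </dependency>
  </dependencies>
</project>
```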

End result

By selecting a release (the deploy-pom's version) and clicking install, we're done. 6 clicks (including the install button) is all we need for all our environments, and we're guaranteed that exactly the same set of plugins has been deployed, in exactly the same versions!

Login tokens on GitHub

General Blogs January 21, 2015 By Marcus Hjortzén

I previously wrote a blog entry about Liferay Sync and ways to authenticate the application even when using SSO (and not really having access to the SSO password).

Apparently there was interest in seeing the code and hopefully using it, so I have published it on GitHub under my employer's organization (Uppsala University).

The code is really written as a proof of concept; there are still things that would need to be changed. That being said, the code seems to work very well, so it's a good base to start from.

  • UX. Or UI. Really. Right now it's just a blob of text, a table and a bunch of buttons.
  • Authentication before generating new tokens, to prevent someone from hijacking your browser for 10 seconds while you're looking away and thereby obtaining a private password to your account.
  • Stop validating that the expando field has been created on every access.

Those are the things I could name just off the top of my head. Anyway, the code is free to use and, if you want, please contribute!


Liferay Sync in a portal using Single Sign On

General Blogs January 15, 2015 By Marcus Hjortzén

Why SSO?

Using single sign-on (SSO) is probably something that more and more systems aim for. A large benefit is improved user experience, though perhaps not so much for publicly available sites. Those sites generally attach to Facebook, Twitter or Google and await the crowds going 'aaaaw' in appreciation of the techniques used.
No, instead, imagine a portal used internally by an organization. Most probably the users also log in to other web applications. Many of them.
Where I work, a typical administrator logs into tens of internal web applications each day. They add up quickly: webmail, the Liferay portal, the HR system for noting your working hours, etc.
This is where SSO really shines.
But let's take a look at the Liferay portal. There are many ways of solving single sign-on (such as CAS and other plugins in Liferay, or using Shibboleth with the help of proxies filtering Tomcat), but the one thing they have in common is: no password is ever stored locally in Liferay.
Well, that's not entirely true; there has to be a password. But it's not a password used by the user; most probably it is an auto-generated field. The local password isn't really important when authenticating a user logging in to the portal, as long as we trust someone else telling us that the user is authenticated. In CAS this is done by validating a ticket, in Shibboleth by looking at extra headers in the request.
But there are some cases where the password plays an important role.
One is when the SSO framework is offline or simply doesn't work. Make sure you have an admin account that DOES have a local password, otherwise it might be difficult to administer your portal in these cases! :)
Another is when using remote services generated by Service Builder, using WebDAV to access documents stored in your sites or, similarly, when using Liferay Sync to replicate your documents to many devices! In these cases a local password is needed.

So, what are the options available to you as a developer in these cases?

If there was a way to insert your own authentication, then perhaps you could verify the user yourself, for instance by binding against an LDAP directory or some other backend.
I set out with my debugger trying to find a good extension point for the Liferay Sync method calls. Liferay Sync uses the JSON web services for its interaction with the portal and currently uses preemptive basic authentication, which means that the username/password is sent with the request (hopefully over a secure transport layer).
After a lot of trial-and-error hacking with autologins, authenticators and extensions to the servlet filters, I found a very easy path to success.

Enter UserLocalService.

Since Liferay Sync uses JSON web services and basic authentication, we will eventually end up in UserLocalService. Methods like authenticateByEmailAddress, authenticateByScreenName and authenticateByUserId are used (depending on portal properties settings) to validate a user's password.
By creating a simple hook plugin and overriding these methods, you're able to provide your own authentication! Perfect, just what we need! By overriding these methods you're also effectively changing the standard (HTTP) login authentication, but since Shibboleth intercepts those redirects, users won't be able to trigger your authentication code by logging into the portal's web UI.
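The shape of such an override can be sketched in a self-contained way. UserService and DefaultUserService below are stand-ins for Liferay's UserLocalService and wrapper classes, and the token lookup is a made-up placeholder; this only illustrates the "check our own credential first, otherwise delegate" pattern, not the real hook code:

```java
import java.util.Map;

// Stand-in for Liferay's UserLocalService authenticate method.
interface UserService {
    // Returns 1 on success, -1 on failure (mirroring Authenticator-style codes).
    int authenticateByScreenName(long companyId, String screenName, String password);
}

// Stand-in for the original service: in an SSO setup the local
// password is auto-generated, so a user-typed password never matches.
class DefaultUserService implements UserService {
    public int authenticateByScreenName(long companyId, String screenName, String password) {
        return -1;
    }
}

// The "hook": wraps the original service and checks device tokens first.
class TokenAuthUserService implements UserService {

    private final UserService wrapped;
    private final Map<String, String> tokensByScreenName; // stand-in for the expando lookup

    TokenAuthUserService(UserService wrapped, Map<String, String> tokensByScreenName) {
        this.wrapped = wrapped;
        this.tokensByScreenName = tokensByScreenName;
    }

    public int authenticateByScreenName(long companyId, String screenName, String password) {
        String token = tokensByScreenName.get(screenName);
        if (token != null && token.equals(password)) {
            return 1; // device token accepted
        }
        // Anything else falls through to the original behaviour.
        return wrapped.authenticateByScreenName(companyId, screenName, password);
    }
}
```

In the real hook the wrapper would extend Liferay's service wrapper class and read the token from the User's expando attributes instead of a map.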

But what to authenticate against?

The SSO password is of course a highly valued resource. If decrypted, a hacker would be able to access a large number of systems. Storing this password on any device is probably not a good idea, in case the device gets stolen, hacked or similar.
My idea was instead to generate tokens that are used per device and individually revocable. That way, if you lose your iPhone, you can revoke Liferay Sync's access for that phone only, without having to reset your SSO password in all systems.
I solved this by creating a small portlet displaying all the currently generated tokens (really just their descriptions; the token itself is only shown to the user once, at creation, then encrypted) with the option to revoke them, plus the possibility to generate a new token with a given description. The token is shown to the user once, then encrypted into an expando value on the User object.
By using this expando value, the UserLocalService hook can authenticate the user, thus enabling Liferay Sync to access the user's documents!
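The token lifecycle can be sketched like this. Note the hedges: the class and method names are made up, the real portlet stores the value in an expando field on the User, and the post says "encrypted" while this sketch uses a one-way hash as one possible approach:

```java
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

// Sketch of per-device token handling: generate once, store only a digest.
class DeviceTokenUtil {

    private static final SecureRandom RANDOM = new SecureRandom();

    // Generate a random token; this is the only moment the plain value exists.
    static String generateToken() {
        byte[] bytes = new byte[32];
        RANDOM.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    // One-way hash; this is what would be stored in the expando field.
    static String hashToken(String token) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            byte[] hashed = digest.digest(token.getBytes("UTF-8"));
            return Base64.getEncoder().encodeToString(hashed);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // Called from the UserLocalService hook: does the password presented by
    // Liferay Sync match the stored device-token digest?
    static boolean verify(String presented, String storedHash) {
        return hashToken(presented).equals(storedHash);
    }
}
```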


Very much related to Liferay Sync is of course WebDAV: a way to map a folder directly to your desktop (the Documents and Media portlet happily informs you of this all the time) from different operating systems. There are a couple of drawbacks with WebDAV, however; for one, not all OSes support the protocol entirely. For instance, from a Mac you're able to read documents but not upload new ones. From Windows 7 32-bit it kind of works, but in 64-bit it doesn't. Or was it the other way around? Or was it with IPv6?
Anyway, lots of quirks. And! Authentication over WebDAV commonly uses digest authentication; there is support for this in Liferay, but it is of course much more complex! It could probably be done, at least by copying source code, but given that the OS support is very limited, we're focusing on the Liferay Sync app instead.


Feature toggling your code

General Blogs November 19, 2014 By Marcus Hjortzén

Feature toggling

Have you ever had the delicate problem where you've built a new feature that the customer doesn't wish to activate quite yet? What do you do? One option is of course to comment out the sections and commit the source change. Another is to never commit the feature to version control until the customer has verified that he/she wants it in production.
In the end, probably the best way is to incorporate the code into the production code, but with a small boolean making sure that the function isn't active. That way the code doesn't have to be maintained separately, and any bug introduced by the new code should be discovered quite early.
This method is usually recommended in projects with short sprints or even continuous delivery.
So, trying to apply this in a real-world scenario (i.e. the portal I'm working on), one gets frustrated in one or all of these situations:
  • The customer wishes to review the function, just as you added the if (false) statement and deployed
  • The customer wishes to activate the function in the production environment, but no downtime (even 1 minute for redeployment of code) is accepted
  • The customer didn't really want the function - remove it again! (or, stated somewhat differently: there's a bug!)
We needed a way to feature toggle functions without the hassle of redeploying, recompiling code, or changing and restarting the servers!
Enter FeatureToggle-portlet.


Feature Toggle Portlet

So, the requirements are pretty simple. Make a portlet that can answer one simple question: "Is feature XYZ enabled?"
It should also be able to change the state of a feature, back and forth.

Happy coding days

With this in mind, happy coding days had arrived: grab the keyboard and open up your favourite editor!


The service is very simple: it models a Feature that has a feature key, a boolean state, a description and also, for scoping functionality, a group id.


The portlet needs to present the user with a UI where all features in the system are listed, with a way to toggle their state (active/inactive). The portlet also needs to understand its scope: use the global scope if shown inside the Control Panel, otherwise use the group id of the site if added to a site layout. This way, a feature can be toggled on in one group, off in all others, and not shown at all in the global scope.

Message bus API

Besides using the service API, there should be an easy way of asking the question "Is feature XYZ enabled?" without worrying about class loader problems or CLP. Even the service API binds clients too tightly: any change in the service API would affect the client, even though the client still just wants an answer to the same question - is the feature enabled?
Using the message bus for this loosely coupled API seems like a neat idea, especially since the only coupling is a shared message destination string.


Going out to the bus each time we want to check a feature seems like a waste of resources, especially when a feature is used by every page in the portal (yes, we have a couple of those), so a cache is a really good idea. Now, each plugin cannot implement its own message bus cache - or at least, we would soon have 20 duplicates of the same code. So, with the strength of Maven, we built a common project with a helper that caches any previous responses (and clears the cache upon state change).


With the portlet in place and a helper class in a common library, we're all set to use it from code in all plugins within the portal! Our portal consists of roughly 30 plugins; not all of them need to feature toggle their code, of course, but once in place, it is surprising how often it is used. At least for a short period of time!
The helper class isn't doing much magic; it does, however, cache any responses in a single-VM pool. This cached value can also be reset from the other side of the message bus, making it possible to invalidate the cache upon state change.
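The helper's caching behaviour can be sketched in a self-contained way. The real implementation asks over the Liferay message bus and caches in a single-VM pool; the FeatureResolver indirection and all names here are made up for illustration:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Stand-in for the message bus round-trip to the FeatureToggle portlet.
interface FeatureResolver {
    boolean isEnabled(String featureKey, long groupId);
}

// Caches answers so that not every check goes out over the bus.
class FeatureToggleHelper {

    private final FeatureResolver resolver;
    private final Map<String, Boolean> cache = new ConcurrentHashMap<>();

    FeatureToggleHelper(FeatureResolver resolver) {
        this.resolver = resolver;
    }

    boolean isEnabled(String featureKey, long groupId) {
        String cacheKey = groupId + ":" + featureKey;
        Boolean cached = cache.get(cacheKey);
        if (cached == null) {
            // Cache miss: ask the resolver (in reality, the message bus).
            cached = resolver.isEnabled(featureKey, groupId);
            cache.put(cacheKey, cached);
        }
        return cached;
    }

    // Invoked when a state-change message arrives from the portlet.
    void invalidate() {
        cache.clear();
    }
}
```

Client code would then just ask something like helper.isEnabled("blog-pages", groupId), and a toggle flip in the portlet would publish a message that ends up calling invalidate().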


Once the portlet was written, a lot of code got feature toggles; some because we wanted to try out new functionality in live environments, but only by enabling it for a specific set of users/sites. We've also used it to enable/disable whole pages for our sites (~1500 sites) that are otherwise controlled by an active site template propagation. This way the user can choose whether he/she wishes to use blog functionality (without necessarily having to access the Control Panel).

Our code is mostly generic and could probably be used by others; there are only a few minor site-specific parts. Give me a shout if you wish to see the code or use it!


Fast development using Compass/Sass and Liferay Portal

General Blogs January 20, 2014 By Marcus Hjortzén

We're moving towards writing all our styles using Sass. Since we already have a lot of styles in place in different themes and plugins, this means a lot of copy-rewrite-test. Compiling and deploying a theme every time you write a new rule just to test it takes a lot of time, which is why we tried to get a quicker roundtrip.

This is a write-up of how we got Liferay Portal to work with an external Sass/Compass compiler, giving us auto-deployment of a theme. Our environment is OS X, but any *NIX should work. Windows largely depends on a proper shell.

First out: Liferay

To bypass caching, thus enabling reloads from disk, we have to include the properties listed in portal-developer.properties, either by including this file or by copying some (or all) of the properties into your portal-ext.properties.

Second: Install a Sass/Compass compiler

Install a compiler that automatically compiles your Sass/Compass code whenever a change occurs:

gem install compass

Third: Configure Compass and tell it to watch the theme directory

Compass watches a directory for changes and automatically compiles the .scss files. The output directory should be your plugin's installation directory, that is, /liferay-portal-6.1.30-ee-ga3/tomcat-7.0.40/webapps/my-theme/css
Generate a configuration file (config.rb) in your plugin root:

http_path = "/"
css_path = "/.../liferay-portal-6.1.30-ee-ga3/tomcat-7.0.40/webapps/UU-universal-theme/css"
sass_dir = "src/main/webapp/css"
Run Compass and tell it to watch this directory:
compass watch
NOTE! The Compass compiler only reacts to files named .scss or .sass!
Liferay, on the other hand, only compiles files that end with .css! This is a big issue for developers without a proper shell. In my case I created a link with the ln command. I would really like to see a future release of Liferay process .scss files in the compile stage, not just .css.
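As an illustration of the ln trick, assuming your Sass sources keep a .css extension (so the normal Liferay build still picks them up), a symlink gives Compass a .scss name to watch. The file names here are hypothetical, and the demo runs in a scratch directory; in practice you would run the ln inside your theme's src/main/webapp/css directory:

```shell
# Demo in a scratch directory.
demo=$(mktemp -d)
touch "$demo/custom.css"              # stands in for the theme's Sass-in-css source
ln -s custom.css "$demo/custom.scss"  # Compass now has a .scss name to watch
ls -l "$demo"
```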
In my case, whenever a change is detected in the sass_common.scss file (or any file it imports), a compiled version is installed in my tomcat/webapps/plugin/css directory. Since Liferay is running with developer settings, the change is picked up on the next browser reload.
With this setup we're able to quickly take rules from an old theme, rewrite them in Sass style and easily verify that they are correct. It doesn't sound like much, but this literally saves us hours per week!